Feb 13 20:09:50.150176 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:09:50.150232 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:09:50.150256 kernel: BIOS-provided physical RAM map:
Feb 13 20:09:50.150273 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 20:09:50.150289 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 20:09:50.150307 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 20:09:50.150327 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 20:09:50.150350 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 20:09:50.150371 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Feb 13 20:09:50.150389 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Feb 13 20:09:50.150403 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Feb 13 20:09:50.150419 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Feb 13 20:09:50.150433 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 20:09:50.150448 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 20:09:50.150476 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 20:09:50.150496 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 20:09:50.150516 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 20:09:50.150536 kernel: NX (Execute Disable) protection: active
Feb 13 20:09:50.150555 kernel: APIC: Static calls initialized
Feb 13 20:09:50.150575 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:09:50.150598 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Feb 13 20:09:50.150614 kernel: SMBIOS 2.4 present.
Feb 13 20:09:50.150630 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 20:09:50.150646 kernel: Hypervisor detected: KVM
Feb 13 20:09:50.150667 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:09:50.150687 kernel: kvm-clock: using sched offset of 12705538042 cycles
Feb 13 20:09:50.150708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:09:50.150730 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 20:09:50.150749 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:09:50.150771 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:09:50.150791 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 20:09:50.150812 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 20:09:50.150829 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:09:50.150849 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 20:09:50.150865 kernel: Using GB pages for direct mapping
Feb 13 20:09:50.150882 kernel: Secure boot disabled
Feb 13 20:09:50.150899 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:09:50.150919 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 20:09:50.150938 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 20:09:50.150959 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 20:09:50.150988 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 20:09:50.151013 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 20:09:50.151033 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 20:09:50.151055 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 20:09:50.151085 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 20:09:50.151103 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 20:09:50.151120 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 20:09:50.151156 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 20:09:50.151177 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 20:09:50.151199 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 20:09:50.151221 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 20:09:50.151242 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 20:09:50.151266 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 20:09:50.151283 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 20:09:50.151302 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 20:09:50.151326 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 20:09:50.151349 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 20:09:50.151369 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:09:50.151391 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:09:50.151415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:09:50.151437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 20:09:50.151459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 20:09:50.151483 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 20:09:50.151508 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 20:09:50.151530 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Feb 13 20:09:50.151559 kernel: Zone ranges:
Feb 13 20:09:50.151579 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:09:50.151600 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:09:50.151621 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 20:09:50.151644 kernel: Movable zone start for each node
Feb 13 20:09:50.151669 kernel: Early memory node ranges
Feb 13 20:09:50.151692 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 20:09:50.151715 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 20:09:50.151743 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Feb 13 20:09:50.151769 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 20:09:50.151791 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 20:09:50.151813 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 20:09:50.151838 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:09:50.151861 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 20:09:50.151882 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 20:09:50.151905 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 20:09:50.151930 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 20:09:50.151956 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 20:09:50.151983 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:09:50.152008 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:09:50.152035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:09:50.152057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:09:50.152092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:09:50.152126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:09:50.152159 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:09:50.152184 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:09:50.152203 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 20:09:50.152228 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:09:50.152253 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:09:50.152278 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:09:50.152302 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:09:50.152328 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:09:50.152349 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:09:50.152368 kernel: kvm-guest: PV spinlocks enabled
Feb 13 20:09:50.152390 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:09:50.152415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:09:50.152443 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:09:50.152467 kernel: random: crng init done
Feb 13 20:09:50.152491 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 20:09:50.152516 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:09:50.152537 kernel: Fallback order for Node 0: 0
Feb 13 20:09:50.152563 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Feb 13 20:09:50.152590 kernel: Policy zone: Normal
Feb 13 20:09:50.152614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:09:50.152643 kernel: software IO TLB: area num 2.
Feb 13 20:09:50.152670 kernel: Memory: 7513396K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 346928K reserved, 0K cma-reserved)
Feb 13 20:09:50.152696 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:09:50.152721 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:09:50.152748 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:09:50.152770 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:09:50.152792 kernel: Dynamic Preempt: voluntary
Feb 13 20:09:50.152813 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:09:50.152835 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:09:50.152879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:09:50.152901 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:09:50.152925 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:09:50.152950 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:09:50.152975 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:09:50.152994 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:09:50.153013 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:09:50.153033 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:09:50.153054 kernel: Console: colour dummy device 80x25
Feb 13 20:09:50.153095 kernel: printk: console [ttyS0] enabled
Feb 13 20:09:50.153115 kernel: ACPI: Core revision 20230628
Feb 13 20:09:50.153144 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:09:50.153164 kernel: x2apic enabled
Feb 13 20:09:50.153184 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:09:50.153204 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 20:09:50.153224 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 20:09:50.153244 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 20:09:50.153268 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 20:09:50.153288 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 20:09:50.153309 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:09:50.153343 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 20:09:50.153362 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 20:09:50.153382 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 20:09:50.153402 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:09:50.153421 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:09:50.153441 kernel: RETBleed: Mitigation: IBRS
Feb 13 20:09:50.153466 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:09:50.153486 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 20:09:50.153517 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:09:50.153537 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:09:50.153556 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:09:50.153576 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:09:50.153596 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:09:50.153616 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:09:50.153637 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:09:50.153662 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:09:50.153683 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:09:50.153703 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:09:50.153723 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:09:50.153742 kernel: landlock: Up and running.
Feb 13 20:09:50.153762 kernel: SELinux: Initializing.
Feb 13 20:09:50.153783 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:09:50.153803 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:09:50.153824 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 20:09:50.153847 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:09:50.153867 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:09:50.153886 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:09:50.153907 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 20:09:50.153928 kernel: signal: max sigframe size: 1776
Feb 13 20:09:50.153948 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:09:50.153968 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:09:50.153988 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:09:50.154012 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:09:50.154032 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:09:50.154051 kernel: .... node #0, CPUs: #1
Feb 13 20:09:50.154087 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 20:09:50.154121 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:09:50.154152 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:09:50.154166 kernel: smpboot: Max logical packages: 1
Feb 13 20:09:50.154183 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 20:09:50.154201 kernel: devtmpfs: initialized
Feb 13 20:09:50.154224 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:09:50.154247 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 20:09:50.154270 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:09:50.154291 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:09:50.154311 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:09:50.154331 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:09:50.154350 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:09:50.154370 kernel: audit: type=2000 audit(1739477388.760:1): state=initialized audit_enabled=0 res=1
Feb 13 20:09:50.154391 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:09:50.154416 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:09:50.154436 kernel: cpuidle: using governor menu
Feb 13 20:09:50.154456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:09:50.154475 kernel: dca service started, version 1.12.1
Feb 13 20:09:50.154495 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:09:50.154516 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:09:50.154536 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:09:50.154556 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:09:50.154576 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:09:50.154600 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:09:50.154621 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:09:50.154641 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:09:50.154661 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:09:50.154682 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:09:50.154702 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 20:09:50.154722 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:09:50.154742 kernel: ACPI: Interpreter enabled
Feb 13 20:09:50.154762 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 20:09:50.154786 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:09:50.154806 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:09:50.154826 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 20:09:50.154846 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 20:09:50.154866 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:09:50.155169 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:09:50.155401 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:09:50.155614 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:09:50.155639 kernel: PCI host bridge to bus 0000:00
Feb 13 20:09:50.155848 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:09:50.156039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:09:50.156281 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:09:50.156490 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 20:09:50.156705 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:09:50.156980 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:09:50.157259 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 20:09:50.157513 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:09:50.157746 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 20:09:50.157990 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 20:09:50.158254 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 20:09:50.158497 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 20:09:50.158718 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:09:50.158938 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 20:09:50.159198 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 20:09:50.159459 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:09:50.159703 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 20:09:50.160391 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 20:09:50.160432 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:09:50.160454 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:09:50.160475 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:09:50.160495 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:09:50.160515 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:09:50.160535 kernel: iommu: Default domain type: Translated
Feb 13 20:09:50.160556 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:09:50.160578 kernel: efivars: Registered efivars operations
Feb 13 20:09:50.160598 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:09:50.160622 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:09:50.160642 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 20:09:50.160662 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 20:09:50.160682 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 20:09:50.160701 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 20:09:50.160721 kernel: vgaarb: loaded
Feb 13 20:09:50.160741 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:09:50.160762 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:09:50.160783 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:09:50.160807 kernel: pnp: PnP ACPI init
Feb 13 20:09:50.160828 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 20:09:50.160848 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:09:50.160869 kernel: NET: Registered PF_INET protocol family
Feb 13 20:09:50.160889 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:09:50.160909 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 20:09:50.160929 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:09:50.160950 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:09:50.160971 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 20:09:50.160995 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 20:09:50.161017 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:09:50.161037 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:09:50.161057 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:09:50.161113 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:09:50.161334 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:09:50.161528 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:09:50.161764 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:09:50.161961 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 20:09:50.162935 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:09:50.162975 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:09:50.162999 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:09:50.163023 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 20:09:50.163045 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:09:50.163149 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 20:09:50.163175 kernel: clocksource: Switched to clocksource tsc
Feb 13 20:09:50.163203 kernel: Initialise system trusted keyrings
Feb 13 20:09:50.163227 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 20:09:50.163251 kernel: Key type asymmetric registered
Feb 13 20:09:50.163272 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:09:50.163297 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:09:50.163322 kernel: io scheduler mq-deadline registered
Feb 13 20:09:50.163345 kernel: io scheduler kyber registered
Feb 13 20:09:50.163368 kernel: io scheduler bfq registered
Feb 13 20:09:50.163391 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:09:50.163422 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:09:50.163689 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 20:09:50.163721 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 20:09:50.164482 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 20:09:50.164522 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:09:50.164763 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 20:09:50.164794 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:09:50.164818 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:09:50.164842 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 20:09:50.164874 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 20:09:50.164895 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 20:09:50.165175 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 20:09:50.165207 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:09:50.165232 kernel: i8042: Warning: Keylock active
Feb 13 20:09:50.165255 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:09:50.165278 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:09:50.165508 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 20:09:50.165722 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 20:09:50.165921 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:09:49 UTC (1739477389)
Feb 13 20:09:50.168213 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 20:09:50.168250 kernel: intel_pstate: CPU model not supported
Feb 13 20:09:50.168276 kernel: pstore: Using crash dump compression: deflate
Feb 13 20:09:50.168299 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 20:09:50.168323 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:09:50.168354 kernel: Segment Routing with IPv6
Feb 13 20:09:50.168378 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:09:50.168402 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:09:50.168423 kernel: Key type dns_resolver registered
Feb 13 20:09:50.168443 kernel: IPI shorthand broadcast: enabled
Feb 13 20:09:50.168465 kernel: sched_clock: Marking stable (946005004, 176386024)->(1195338433, -72947405)
Feb 13 20:09:50.168487 kernel: registered taskstats version 1
Feb 13 20:09:50.168511 kernel: Loading compiled-in X.509 certificates
Feb 13 20:09:50.168533 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:09:50.168555 kernel: Key type .fscrypt registered
Feb 13 20:09:50.168585 kernel: Key type fscrypt-provisioning registered
Feb 13 20:09:50.168607 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:09:50.168631 kernel: ima: No architecture policies found
Feb 13 20:09:50.168652 kernel: clk: Disabling unused clocks
Feb 13 20:09:50.168673 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 13 20:09:50.168693 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:09:50.168715 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:09:50.168737 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:09:50.168764 kernel: Run /init as init process
Feb 13 20:09:50.168788 kernel: with arguments:
Feb 13 20:09:50.168810 kernel: /init
Feb 13 20:09:50.168834 kernel: with environment:
Feb 13 20:09:50.168856 kernel: HOME=/
Feb 13 20:09:50.168876 kernel: TERM=linux
Feb 13 20:09:50.168894 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:09:50.168918 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:09:50.168949 systemd[1]: Detected virtualization google.
Feb 13 20:09:50.168971 systemd[1]: Detected architecture x86-64.
Feb 13 20:09:50.168992 systemd[1]: Running in initrd.
Feb 13 20:09:50.169015 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:09:50.169036 systemd[1]: Hostname set to .
Feb 13 20:09:50.169059 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:09:50.169103 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:09:50.169125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:09:50.169160 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:09:50.169183 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:09:50.169204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:09:50.169227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:09:50.169250 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:09:50.169274 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:09:50.169301 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:09:50.169323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:09:50.169348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:09:50.169391 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:09:50.169417 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:09:50.169440 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:09:50.169465 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:09:50.169491 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:09:50.169515 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:09:50.169537 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:09:50.169562 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:09:50.169586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:09:50.169608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:09:50.169631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:09:50.169654 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:09:50.169681 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:09:50.169703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:09:50.169726 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:09:50.169748 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:09:50.169771 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:09:50.169794 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:09:50.169855 systemd-journald[183]: Collecting audit messages is disabled.
Feb 13 20:09:50.169909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:09:50.169933 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:09:50.169955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:09:50.169978 systemd-journald[183]: Journal started
Feb 13 20:09:50.170027 systemd-journald[183]: Runtime Journal (/run/log/journal/16aed20ed07c48879b15f5e103ad8c40) is 8.0M, max 148.7M, 140.7M free.
Feb 13 20:09:50.179172 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:09:50.178046 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:09:50.190862 systemd-modules-load[184]: Inserted module 'overlay'
Feb 13 20:09:50.193348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:09:50.209903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:09:50.213779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:09:50.233650 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:09:50.235152 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:09:50.239669 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 13 20:09:50.245284 kernel: Bridge firewalling registered
Feb 13 20:09:50.242630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:09:50.255654 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:09:50.264403 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:09:50.267456 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:09:50.284493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:09:50.303753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:09:50.316577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:09:50.321639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:09:50.332678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:09:50.349431 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:09:50.373577 systemd-resolved[213]: Positive Trust Anchors:
Feb 13 20:09:50.374213 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:09:50.374606 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:09:50.382745 systemd-resolved[213]: Defaulting to hostname 'linux'.
Feb 13 20:09:50.402584 dracut-cmdline[217]: dracut-dracut-053
Feb 13 20:09:50.402584 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:09:50.388685 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:09:50.398945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:09:50.496116 kernel: SCSI subsystem initialized
Feb 13 20:09:50.508123 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:09:50.521101 kernel: iscsi: registered transport (tcp)
Feb 13 20:09:50.547352 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:09:50.547445 kernel: QLogic iSCSI HBA Driver
Feb 13 20:09:50.605945 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:09:50.613336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:09:50.665115 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:09:50.665226 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:09:50.665260 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:09:50.713114 kernel: raid6: avx2x4 gen() 17753 MB/s
Feb 13 20:09:50.730107 kernel: raid6: avx2x2 gen() 17793 MB/s
Feb 13 20:09:50.747563 kernel: raid6: avx2x1 gen() 13873 MB/s
Feb 13 20:09:50.747628 kernel: raid6: using algorithm avx2x2 gen() 17793 MB/s
Feb 13 20:09:50.765595 kernel: raid6: .... xor() 17451 MB/s, rmw enabled
Feb 13 20:09:50.765661 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:09:50.791123 kernel: xor: automatically using best checksumming function avx
Feb 13 20:09:50.988101 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:09:51.004338 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:09:51.009364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:09:51.053934 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Feb 13 20:09:51.062149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:09:51.072318 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:09:51.098834 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 20:09:51.140599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:09:51.153353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:09:51.262708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:09:51.285399 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:09:51.345541 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:09:51.366516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:09:51.400171 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:09:51.402239 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:09:51.435277 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:09:51.422028 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:09:51.457603 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Feb 13 20:09:51.511166 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:09:51.511996 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:09:51.528235 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:09:51.548333 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:09:51.655022 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 20:09:51.665580 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 20:09:51.666615 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 20:09:51.666905 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 20:09:51.667221 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 20:09:51.667527 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:09:51.667565 kernel: GPT:17805311 != 25165823
Feb 13 20:09:51.667596 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:09:51.667622 kernel: GPT:17805311 != 25165823
Feb 13 20:09:51.667652 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:09:51.667682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:09:51.667713 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 20:09:51.548598 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:09:51.560446 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:09:51.571180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:09:51.571464 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:09:51.583515 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:09:51.666030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:09:51.677411 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:09:51.738101 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (445)
Feb 13 20:09:51.764097 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451)
Feb 13 20:09:51.782807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:09:51.812588 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 20:09:51.831603 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 20:09:51.852287 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 20:09:51.868257 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 20:09:51.896266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 20:09:51.922487 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:09:51.938989 disk-uuid[538]: Primary Header is updated.
Feb 13 20:09:51.938989 disk-uuid[538]: Secondary Entries is updated.
Feb 13 20:09:51.938989 disk-uuid[538]: Secondary Header is updated.
Feb 13 20:09:51.975252 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:09:51.960825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:09:52.007278 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:09:52.007379 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:09:52.047617 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:09:53.001101 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:09:53.001217 disk-uuid[539]: The operation has completed successfully.
Feb 13 20:09:53.098987 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:09:53.099201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:09:53.125365 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:09:53.157975 sh[566]: Success
Feb 13 20:09:53.183102 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:09:53.291732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:09:53.299951 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:09:53.330591 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:09:53.372133 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:09:53.372235 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:09:53.372271 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:09:53.379635 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:09:53.386467 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:09:53.420139 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 20:09:53.426213 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:09:53.441212 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:09:53.447457 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:09:53.484293 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:09:53.532056 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:09:53.532122 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:09:53.532163 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:09:53.532194 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:09:53.532224 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:09:53.544861 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:09:53.561289 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:09:53.570907 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:09:53.597446 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:09:53.727284 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:09:53.774990 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:09:53.801181 ignition[654]: Ignition 2.19.0
Feb 13 20:09:53.801207 ignition[654]: Stage: fetch-offline
Feb 13 20:09:53.805640 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:09:53.801274 ignition[654]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:09:53.833415 systemd-networkd[754]: lo: Link UP
Feb 13 20:09:53.801294 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:09:53.833421 systemd-networkd[754]: lo: Gained carrier
Feb 13 20:09:53.801459 ignition[654]: parsed url from cmdline: ""
Feb 13 20:09:53.835416 systemd-networkd[754]: Enumeration completed
Feb 13 20:09:53.801469 ignition[654]: no config URL provided
Feb 13 20:09:53.836127 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:09:53.801480 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:09:53.836136 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:09:53.801497 ignition[654]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:09:53.838793 systemd-networkd[754]: eth0: Link UP
Feb 13 20:09:53.801510 ignition[654]: failed to fetch config: resource requires networking
Feb 13 20:09:53.838798 systemd-networkd[754]: eth0: Gained carrier
Feb 13 20:09:53.801863 ignition[654]: Ignition finished successfully
Feb 13 20:09:53.838809 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:09:53.932758 ignition[760]: Ignition 2.19.0
Feb 13 20:09:53.849653 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:09:53.932770 ignition[760]: Stage: fetch
Feb 13 20:09:53.851204 systemd-networkd[754]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 20:09:53.933022 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:09:53.866750 systemd[1]: Reached target network.target - Network.
Feb 13 20:09:53.933036 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:09:53.891917 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:09:53.933232 ignition[760]: parsed url from cmdline: ""
Feb 13 20:09:53.945864 unknown[760]: fetched base config from "system"
Feb 13 20:09:53.933239 ignition[760]: no config URL provided
Feb 13 20:09:53.945879 unknown[760]: fetched base config from "system"
Feb 13 20:09:53.933249 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:09:53.945899 unknown[760]: fetched user config from "gcp"
Feb 13 20:09:53.933264 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:09:53.953746 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:09:53.933296 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 20:09:53.977333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:09:53.938804 ignition[760]: GET result: OK
Feb 13 20:09:54.021409 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:09:53.938943 ignition[760]: parsing config with SHA512: b1a8d88b27fbecbf1418572c0db909f058e010804d6103ad56b73b3e4dc3769fafbe1c2a25600b79710de07780fcd039aae1fd53b80aba2fe8f3c7698dba24f0
Feb 13 20:09:54.047335 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:09:53.947046 ignition[760]: fetch: fetch complete
Feb 13 20:09:54.096233 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:09:53.947058 ignition[760]: fetch: fetch passed
Feb 13 20:09:54.121251 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:09:53.947167 ignition[760]: Ignition finished successfully
Feb 13 20:09:54.141308 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:09:54.018444 ignition[767]: Ignition 2.19.0
Feb 13 20:09:54.158291 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:09:54.018454 ignition[767]: Stage: kargs
Feb 13 20:09:54.172292 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:09:54.018696 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:09:54.188307 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:09:54.018710 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:09:54.213327 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:09:54.019865 ignition[767]: kargs: kargs passed
Feb 13 20:09:54.019928 ignition[767]: Ignition finished successfully
Feb 13 20:09:54.093051 ignition[773]: Ignition 2.19.0
Feb 13 20:09:54.093061 ignition[773]: Stage: disks
Feb 13 20:09:54.093542 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:09:54.093558 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:09:54.094592 ignition[773]: disks: disks passed
Feb 13 20:09:54.094649 ignition[773]: Ignition finished successfully
Feb 13 20:09:54.277844 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 20:09:54.422412 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:09:54.449270 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:09:54.603242 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:09:54.604306 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:09:54.605295 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:09:54.625239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:09:54.654400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:09:54.663944 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:09:54.722307 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (790)
Feb 13 20:09:54.722371 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:09:54.722404 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:09:54.722455 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:09:54.664042 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:09:54.764279 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:09:54.764333 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:09:54.664110 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:09:54.748586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:09:54.772586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:09:54.796348 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:09:54.928148 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:09:54.937239 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:09:54.948229 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:09:54.958358 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:09:55.109948 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:09:55.127267 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:09:55.131416 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:09:55.169592 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:09:55.185263 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:09:55.205051 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:09:55.220472 ignition[906]: INFO : Ignition 2.19.0
Feb 13 20:09:55.220472 ignition[906]: INFO : Stage: mount
Feb 13 20:09:55.246276 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:09:55.246276 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:09:55.246276 ignition[906]: INFO : mount: mount passed
Feb 13 20:09:55.246276 ignition[906]: INFO : Ignition finished successfully
Feb 13 20:09:55.225779 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:09:55.239449 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:09:55.363274 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917) Feb 13 20:09:55.363329 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:09:55.363354 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:09:55.363379 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:09:55.363414 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:09:55.363442 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:09:55.287396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:09:55.363266 systemd-networkd[754]: eth0: Gained IPv6LL Feb 13 20:09:55.365329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:09:55.425445 ignition[933]: INFO : Ignition 2.19.0 Feb 13 20:09:55.425445 ignition[933]: INFO : Stage: files Feb 13 20:09:55.440251 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:55.440251 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:55.440251 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:09:55.440251 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:09:55.440251 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:09:55.440251 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:09:55.440251 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:09:55.440251 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:09:55.438513 unknown[933]: wrote ssh authorized keys file for user: core Feb 13 20:09:55.542218 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:09:55.542218 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:09:55.602331 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:09:55.942491 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 
20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:09:56.247699 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:09:56.770116 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:09:56.789267 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:09:56.789267 ignition[933]: INFO : files: files passed Feb 13 20:09:56.789267 ignition[933]: INFO : Ignition finished successfully Feb 13 20:09:56.776770 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:09:56.805506 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:09:56.860335 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:09:56.865833 systemd[1]: ignition-quench.service: Deactivated successfully. 
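The files stage above is driven by a provisioned Ignition config that the log never prints. Below is a minimal sketch of a spec-3.x config that would produce the logged operations, written as Python emitting the JSON; only the paths and download URLs are taken from the log, while the SSH key, file contents, and unit body are placeholders:

```python
# Sketch of an Ignition v3 config matching the file/link/unit operations
# logged above. Reconstructed from the log; key material and unit
# contents are placeholders, not the real provisioned values.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [{
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"],
    }]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/home/core/install.sh"},      # contents not shown in the log
            {"path": "/home/core/nginx.yaml"},
            {"path": "/home/core/nfs-pod.yaml"},
            {"path": "/home/core/nfs-pvc.yaml"},
            {"path": "/etc/flatcar/update.conf"},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
        ],
    },
    "systemd": {"units": [{
        "name": "prepare-helm.service",
        "enabled": True,
        "contents": "[Unit]\nDescription=placeholder unit body\n",
    }]},
}

print(json.dumps(config, indent=2))
```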
Feb 13 20:09:57.024382 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:09:57.024382 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:09:56.865971 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:09:57.074306 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:09:56.943032 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:09:56.955403 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:09:56.979307 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:09:57.042859 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:09:57.043000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:09:57.065223 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:09:57.084500 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:09:57.108563 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:09:57.115364 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:09:57.182393 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:09:57.207342 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:09:57.246255 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:09:57.259675 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:09:57.270715 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:09:57.290755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:09:57.290984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:09:57.325675 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:09:57.334712 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:09:57.352700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:09:57.369723 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:09:57.386752 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:09:57.424674 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:09:57.451474 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:09:57.473467 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:09:57.490500 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:09:57.507476 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:09:57.522386 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:09:57.522860 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:09:57.549592 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:09:57.550043 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:09:57.568684 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
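The two `grep: ... enabled-sysext.conf: No such file or directory` messages above come from initrd-setup-root-after-ignition probing for a sysext enablement list in /sysroot/etc/flatcar and /sysroot/usr/share/flatcar; absence is normal when no sysexts are enabled through that mechanism. A rough Python equivalent of that probe, under the assumption that the hook simply greps both locations and tolerates missing files:

```python
# Hypothetical reconstruction of the enabled-sysext.conf probe whose
# failures are logged above; the real hook is a shell script, and this
# only mirrors the observable behavior (grep in two locations).
from pathlib import Path

CANDIDATES = (
    Path("/sysroot/etc/flatcar/enabled-sysext.conf"),
    Path("/sysroot/usr/share/flatcar/enabled-sysext.conf"),
)

def enabled_sysexts() -> list[str]:
    """Collect sysext names from whichever list files exist; missing
    files produce the same diagnostics seen in the log above."""
    names: list[str] = []
    for conf in CANDIDATES:
        try:
            lines = conf.read_text().splitlines()
        except FileNotFoundError:
            print(f"grep: {conf}: No such file or directory")
            continue
        # Keep non-empty, non-comment entries, like `grep -v '^#'` would.
        names += [ln.strip() for ln in lines
                  if ln.strip() and not ln.lstrip().startswith("#")]
    return names

print(enabled_sysexts())
```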
Feb 13 20:09:57.568909 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:09:57.588707 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:09:57.588940 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:09:57.627750 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:09:57.628036 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:09:57.636789 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:09:57.637011 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:09:57.663682 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:09:57.705225 ignition[987]: INFO : Ignition 2.19.0 Feb 13 20:09:57.705225 ignition[987]: INFO : Stage: umount Feb 13 20:09:57.754281 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:57.754281 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:57.754281 ignition[987]: INFO : umount: umount passed Feb 13 20:09:57.754281 ignition[987]: INFO : Ignition finished successfully Feb 13 20:09:57.726630 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:09:57.735269 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:09:57.735624 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:09:57.747574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:09:57.747887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:09:57.802865 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:09:57.804296 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:09:57.804443 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:09:57.819286 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:09:57.819423 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:09:57.840954 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:09:57.841121 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:09:57.850914 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:09:57.850996 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:09:57.876538 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:09:57.876628 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:09:57.886578 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:09:57.886658 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:09:57.902615 systemd[1]: Stopped target network.target - Network. Feb 13 20:09:57.919517 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:09:57.919609 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:09:57.935573 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:09:57.953545 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:09:57.957211 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:09:57.979277 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:09:57.995285 systemd[1]: Stopped target sockets.target - Socket Units. 
Feb 13 20:09:58.013358 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:09:58.013458 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:09:58.031365 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:09:58.031472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:09:58.049352 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:09:58.049530 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:09:58.068389 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:09:58.068511 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:09:58.087408 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:09:58.087588 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:09:58.105651 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:09:58.110184 systemd-networkd[754]: eth0: DHCPv6 lease lost Feb 13 20:09:58.124504 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:09:58.143035 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:09:58.143233 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:09:58.172512 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:09:58.172832 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:09:58.181352 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:09:58.181418 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:09:58.205247 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:09:58.240244 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:09:58.240520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:09:58.266521 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:09:58.266608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:09:58.284575 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:09:58.284666 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:09:58.292568 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:09:58.292648 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:09:58.309774 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:09:58.328027 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:09:58.328267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:09:58.361537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:09:58.361685 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:09:58.383538 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:09:58.770276 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 13 20:09:58.383608 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:09:58.393547 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:09:58.393633 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 20:09:58.429507 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:09:58.429629 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:09:58.473283 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:09:58.473554 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:09:58.509310 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:09:58.531243 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:09:58.531389 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:09:58.549370 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:09:58.549488 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:09:58.570356 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:09:58.570472 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:09:58.592342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:09:58.592477 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:09:58.613053 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:09:58.613250 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:09:58.633816 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:09:58.633952 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:09:58.655753 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:09:58.671307 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:09:58.717746 systemd[1]: Switching root. 
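Taken together, the units stopped above trace the Ignition stage pipeline this boot ran before handing off to the real root. A short summary as a Python constant; the order is inferred from the logged unit names and "Stage:" banners, not from Ignition documentation:

```python
# Ignition stages observed on this boot, in the order implied by the
# logged units (ignition-fetch-offline, ignition-fetch, ignition-kargs,
# ignition-disks, ignition-mount, ignition-files) and stage banners.
IGNITION_STAGES = (
    "fetch-offline",  # found no usable local config, fell through to fetch
    "fetch",          # retrieved the config over the network (GCE)
    "kargs",          # kernel argument handling
    "disks",          # partitioning / filesystem setup
    "mount",          # "Stage: mount" banner above
    "files",          # "Stage: files" banner above
    "umount",         # "Stage: umount" banner above
)
print(" -> ".join(IGNITION_STAGES))
```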
Feb 13 20:09:59.002257 systemd-journald[183]: Journal stopped Feb 13 20:09:50.150176 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025 Feb 13 20:09:50.150232 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:09:50.150256 kernel: BIOS-provided physical RAM map: Feb 13 20:09:50.150273 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Feb 13 20:09:50.150289 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Feb 13 20:09:50.150307 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Feb 13 20:09:50.150327 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Feb 13 20:09:50.150350 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Feb 13 20:09:50.150371 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Feb 13 20:09:50.150389 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Feb 13 20:09:50.150403 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Feb 13 20:09:50.150419 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Feb 13 20:09:50.150433 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Feb 13 20:09:50.150448 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Feb 13 20:09:50.150476 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Feb 13 20:09:50.150496 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Feb 13 20:09:50.150516 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Feb 13 20:09:50.150536 kernel: NX (Execute Disable) protection: active Feb 13 20:09:50.150555 kernel: APIC: Static calls initialized Feb 13 20:09:50.150575 kernel: efi: EFI v2.7 by EDK II Feb 13 20:09:50.150598 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Feb 13 20:09:50.150614 kernel: SMBIOS 2.4 present. 
Feb 13 20:09:50.150630 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024 Feb 13 20:09:50.150646 kernel: Hypervisor detected: KVM Feb 13 20:09:50.150667 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 20:09:50.150687 kernel: kvm-clock: using sched offset of 12705538042 cycles Feb 13 20:09:50.150708 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 20:09:50.150730 kernel: tsc: Detected 2299.998 MHz processor Feb 13 20:09:50.150749 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 20:09:50.150771 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 20:09:50.150791 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Feb 13 20:09:50.150812 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Feb 13 20:09:50.150829 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 20:09:50.150849 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Feb 13 20:09:50.150865 kernel: Using GB pages for direct mapping Feb 13 20:09:50.150882 kernel: Secure boot disabled Feb 13 20:09:50.150899 kernel: ACPI: Early table checksum verification disabled Feb 13 20:09:50.150919 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Feb 13 20:09:50.150938 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Feb 13 20:09:50.150959 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 13 20:09:50.150988 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 13 20:09:50.151013 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 13 20:09:50.151033 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Feb 13 20:09:50.151055 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 13 20:09:50.151085 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Feb 13 20:09:50.151103 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 13 20:09:50.151120 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 13 20:09:50.151156 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 13 20:09:50.151177 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 13 20:09:50.151199 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 13 20:09:50.151221 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 13 20:09:50.151242 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 13 20:09:50.151266 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 13 20:09:50.151283 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 13 20:09:50.151302 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 13 20:09:50.151326 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 13 20:09:50.151349 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 13 20:09:50.151369 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 20:09:50.151391 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 20:09:50.151415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 20:09:50.151437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Feb 13 20:09:50.151459 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Feb 13 20:09:50.151483 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 13 20:09:50.151508 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 13 20:09:50.151530 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Feb 13 20:09:50.151559 kernel: Zone ranges: Feb 13 20:09:50.151579 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 20:09:50.151600 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 20:09:50.151621 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 13 20:09:50.151644 kernel: Movable zone start for each node Feb 13 20:09:50.151669 kernel: Early memory node ranges Feb 13 20:09:50.151692 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 13 20:09:50.151715 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 13 20:09:50.151743 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Feb 13 20:09:50.151769 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 13 20:09:50.151791 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 13 20:09:50.151813 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 13 20:09:50.151838 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 20:09:50.151861 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 13 20:09:50.151882 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 13 20:09:50.151905 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 20:09:50.151930 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Feb 13 20:09:50.151956 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 20:09:50.151983 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 20:09:50.152008 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 20:09:50.152035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 20:09:50.152057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 20:09:50.152092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 20:09:50.152126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 20:09:50.152159 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 20:09:50.152184 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 20:09:50.152203 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 20:09:50.152228 kernel: Booting paravirtualized kernel on KVM Feb 13 20:09:50.152253 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 20:09:50.152278 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 20:09:50.152302 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 20:09:50.152328 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 20:09:50.152349 kernel: pcpu-alloc: [0] 0 1 Feb 13 20:09:50.152368 kernel: kvm-guest: PV spinlocks enabled Feb 13 20:09:50.152390 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 20:09:50.152415 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:09:50.152443 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 20:09:50.152467 kernel: random: crng init done Feb 13 20:09:50.152491 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 20:09:50.152516 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 20:09:50.152537 kernel: Fallback order for Node 0: 0 Feb 13 20:09:50.152563 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Feb 13 20:09:50.152590 kernel: Policy zone: Normal Feb 13 20:09:50.152614 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 20:09:50.152643 kernel: software IO TLB: area num 2. Feb 13 20:09:50.152670 kernel: Memory: 7513396K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 346928K reserved, 0K cma-reserved) Feb 13 20:09:50.152696 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 20:09:50.152721 kernel: Kernel/User page tables isolation: enabled Feb 13 20:09:50.152748 kernel: ftrace: allocating 37921 entries in 149 pages Feb 13 20:09:50.152770 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 20:09:50.152792 kernel: Dynamic Preempt: voluntary Feb 13 20:09:50.152813 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 20:09:50.152835 kernel: rcu: RCU event tracing is enabled. Feb 13 20:09:50.152879 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 20:09:50.152901 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 20:09:50.152925 kernel: Rude variant of Tasks RCU enabled. Feb 13 20:09:50.152950 kernel: Tracing variant of Tasks RCU enabled. Feb 13 20:09:50.152975 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 20:09:50.152994 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 20:09:50.153013 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 20:09:50.153033 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 20:09:50.153054 kernel: Console: colour dummy device 80x25 Feb 13 20:09:50.153095 kernel: printk: console [ttyS0] enabled Feb 13 20:09:50.153115 kernel: ACPI: Core revision 20230628 Feb 13 20:09:50.153144 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 20:09:50.153164 kernel: x2apic enabled Feb 13 20:09:50.153184 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 20:09:50.153204 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 13 20:09:50.153224 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 20:09:50.153244 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Feb 13 20:09:50.153268 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 13 20:09:50.153288 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 13 20:09:50.153309 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 20:09:50.153343 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 20:09:50.153362 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 20:09:50.153382 kernel: Spectre V2 : Mitigation: IBRS Feb 13 20:09:50.153402 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 20:09:50.153421 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 20:09:50.153441 kernel: RETBleed: Mitigation: IBRS Feb 13 20:09:50.153466 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 20:09:50.153486 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Feb 13 20:09:50.153517 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 20:09:50.153537 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 20:09:50.153556 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 20:09:50.153576 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 20:09:50.153596 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 20:09:50.153616 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 20:09:50.153637 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 20:09:50.153662 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 20:09:50.153683 kernel: Freeing SMP alternatives memory: 32K Feb 13 20:09:50.153703 kernel: pid_max: default: 32768 minimum: 301 Feb 13 20:09:50.153723 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 20:09:50.153742 kernel: landlock: Up and running. Feb 13 20:09:50.153762 kernel: SELinux: Initializing. Feb 13 20:09:50.153783 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:09:50.153803 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 20:09:50.153824 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 13 20:09:50.153847 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:09:50.153867 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:09:50.153886 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 20:09:50.153907 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 13 20:09:50.153928 kernel: signal: max sigframe size: 1776 Feb 13 20:09:50.153948 kernel: rcu: Hierarchical SRCU implementation. Feb 13 20:09:50.153968 kernel: rcu: Max phase no-delay instances is 400. Feb 13 20:09:50.153988 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 20:09:50.154012 kernel: smp: Bringing up secondary CPUs ... Feb 13 20:09:50.154032 kernel: smpboot: x86: Booting SMP configuration: Feb 13 20:09:50.154051 kernel: .... node #0, CPUs: #1 Feb 13 20:09:50.154087 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 20:09:50.154121 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 20:09:50.154152 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 20:09:50.154166 kernel: smpboot: Max logical packages: 1 Feb 13 20:09:50.154183 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 13 20:09:50.154201 kernel: devtmpfs: initialized Feb 13 20:09:50.154224 kernel: x86/mm: Memory block size: 128MB Feb 13 20:09:50.154247 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 13 20:09:50.154270 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 20:09:50.154291 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 20:09:50.154311 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 20:09:50.154331 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 20:09:50.154350 kernel: audit: initializing netlink subsys (disabled) Feb 13 20:09:50.154370 kernel: audit: type=2000 audit(1739477388.760:1): state=initialized audit_enabled=0 res=1 Feb 13 20:09:50.154391 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 20:09:50.154416 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 20:09:50.154436 kernel: cpuidle: using governor menu Feb 13 20:09:50.154456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 20:09:50.154475 kernel: dca service started, version 1.12.1 Feb 13 20:09:50.154495 kernel: PCI: Using configuration type 1 for base access Feb 13 20:09:50.154516 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 20:09:50.154536 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 20:09:50.154556 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 20:09:50.154576 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 20:09:50.154600 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 20:09:50.154621 kernel: ACPI: Added _OSI(Module Device) Feb 13 20:09:50.154641 kernel: ACPI: Added _OSI(Processor Device) Feb 13 20:09:50.154661 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 20:09:50.154682 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 20:09:50.154702 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 20:09:50.154722 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 20:09:50.154742 kernel: ACPI: Interpreter enabled Feb 13 20:09:50.154762 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 20:09:50.154786 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 20:09:50.154806 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 20:09:50.154826 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 20:09:50.154846 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 20:09:50.154866 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 20:09:50.155169 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 20:09:50.155401 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 20:09:50.155614 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 20:09:50.155639 kernel: PCI host bridge to bus 0000:00 Feb 13 20:09:50.155848 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 20:09:50.156039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 20:09:50.156281 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 20:09:50.156490 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 13 20:09:50.156705 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 20:09:50.156980 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 20:09:50.157259 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 13 20:09:50.157513 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 20:09:50.157746 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 20:09:50.157990 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 13 20:09:50.158254 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 13 20:09:50.158497 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 13 20:09:50.158718 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 20:09:50.158938 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 13 20:09:50.159198 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 13 20:09:50.159459 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 20:09:50.159703 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 13 20:09:50.160391 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 13 20:09:50.160432 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 20:09:50.160454 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 20:09:50.160475 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 20:09:50.160495 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 20:09:50.160515 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 20:09:50.160535 kernel: iommu: Default domain type: Translated Feb 13 20:09:50.160556 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 20:09:50.160578 kernel: efivars: Registered efivars operations Feb 13 20:09:50.160598 kernel: PCI: Using ACPI for IRQ routing Feb 13 20:09:50.160622 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 20:09:50.160642 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Feb 13 20:09:50.160662 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Feb 13 20:09:50.160682 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Feb 13 20:09:50.160701 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Feb 13 20:09:50.160721 kernel: vgaarb: loaded Feb 13 20:09:50.160741 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 20:09:50.160762 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 20:09:50.160783 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 20:09:50.160807 kernel: pnp: PnP ACPI init Feb 13 20:09:50.160828 kernel: pnp: PnP ACPI: found 7 devices Feb 13 20:09:50.160848 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 20:09:50.160869 kernel: NET: Registered PF_INET protocol family Feb 13 20:09:50.160889 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 20:09:50.160909 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 20:09:50.160929 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 20:09:50.160950 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 20:09:50.160971 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 20:09:50.160995 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 20:09:50.161017 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 20:09:50.161037 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 20:09:50.161057 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 20:09:50.161113 kernel: NET: Registered PF_XDP protocol family Feb 13 20:09:50.161334 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 20:09:50.161528 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 20:09:50.161764 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 20:09:50.161961 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Feb 13 20:09:50.162935 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 20:09:50.162975 kernel: PCI: CLS 0 bytes, default 64 Feb 13 20:09:50.162999 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 20:09:50.163023 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Feb 13 20:09:50.163045 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:09:50.163149 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 20:09:50.163175 kernel: clocksource: Switched to clocksource tsc Feb 13 20:09:50.163203 kernel: Initialise system trusted keyrings Feb 13 20:09:50.163227 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 20:09:50.163251 kernel: Key type asymmetric registered Feb 13 20:09:50.163272 kernel: Asymmetric key parser 'x509' registered Feb 13 20:09:50.163297 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:09:50.163322 kernel: io scheduler mq-deadline registered Feb 13 20:09:50.163345 kernel: io scheduler kyber registered Feb 13 20:09:50.163368 kernel: io scheduler bfq registered Feb 13 20:09:50.163391 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:09:50.163422 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 20:09:50.163689 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 13 20:09:50.163721 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 13 20:09:50.164482 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 13 20:09:50.164522 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 20:09:50.164763 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 13 20:09:50.164794 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:09:50.164818 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:09:50.164842 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 20:09:50.164874 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 13 20:09:50.164895 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 13 20:09:50.165175 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 13 20:09:50.165207 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:09:50.165232 kernel: i8042: Warning: Keylock active Feb 13 20:09:50.165255 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:09:50.165278 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:09:50.165508 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 13 20:09:50.165722 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 20:09:50.165921 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:09:49 UTC (1739477389) Feb 13 20:09:50.168213 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 20:09:50.168250 kernel: intel_pstate: CPU model not supported Feb 13 20:09:50.168276 kernel: pstore: Using crash dump compression: deflate Feb 13 20:09:50.168299 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 20:09:50.168323 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:09:50.168354 kernel: Segment Routing with IPv6 Feb 13 20:09:50.168378 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:09:50.168402 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:09:50.168423 kernel: Key type dns_resolver registered Feb 13 20:09:50.168443 kernel: IPI shorthand broadcast: enabled Feb 13 20:09:50.168465 kernel: sched_clock: Marking stable (946005004, 176386024)->(1195338433, -72947405) Feb 13 20:09:50.168487 kernel: registered taskstats version 1 Feb 13 20:09:50.168511 kernel: Loading compiled-in X.509 certificates Feb 13 20:09:50.168533 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:09:50.168555 kernel: Key type .fscrypt registered Feb 13 20:09:50.168585 kernel: Key type fscrypt-provisioning registered Feb 13 20:09:50.168607 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:09:50.168631 kernel: ima: No architecture policies found Feb 13 
20:09:50.168652 kernel: clk: Disabling unused clocks Feb 13 20:09:50.168673 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Feb 13 20:09:50.168693 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:09:50.168715 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:09:50.168737 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:09:50.168764 kernel: Run /init as init process Feb 13 20:09:50.168788 kernel: with arguments: Feb 13 20:09:50.168810 kernel: /init Feb 13 20:09:50.168834 kernel: with environment: Feb 13 20:09:50.168856 kernel: HOME=/ Feb 13 20:09:50.168876 kernel: TERM=linux Feb 13 20:09:50.168894 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:09:50.168918 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:09:50.168949 systemd[1]: Detected virtualization google. Feb 13 20:09:50.168971 systemd[1]: Detected architecture x86-64. Feb 13 20:09:50.168992 systemd[1]: Running in initrd. Feb 13 20:09:50.169015 systemd[1]: No hostname configured, using default hostname. Feb 13 20:09:50.169036 systemd[1]: Hostname set to . Feb 13 20:09:50.169059 systemd[1]: Initializing machine ID from random generator. Feb 13 20:09:50.169103 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:09:50.169125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:09:50.169160 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:09:50.169183 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:09:50.169204 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:09:50.169227 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:09:50.169250 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:09:50.169274 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:09:50.169301 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:09:50.169323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:09:50.169348 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:09:50.169391 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:09:50.169417 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:09:50.169440 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:09:50.169465 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:09:50.169491 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:09:50.169515 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:09:50.169537 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:09:50.169562 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Feb 13 20:09:50.169586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:09:50.169608 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:09:50.169631 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:09:50.169654 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:09:50.169681 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:09:50.169703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:09:50.169726 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:09:50.169748 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:09:50.169771 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:09:50.169794 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:09:50.169855 systemd-journald[183]: Collecting audit messages is disabled. Feb 13 20:09:50.169909 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:09:50.169933 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:09:50.169955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:09:50.169978 systemd-journald[183]: Journal started Feb 13 20:09:50.170027 systemd-journald[183]: Runtime Journal (/run/log/journal/16aed20ed07c48879b15f5e103ad8c40) is 8.0M, max 148.7M, 140.7M free. Feb 13 20:09:50.179172 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:09:50.178046 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:09:50.190862 systemd-modules-load[184]: Inserted module 'overlay' Feb 13 20:09:50.193348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:09:50.209903 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:09:50.213779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:09:50.233650 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:09:50.235152 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:09:50.239669 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 13 20:09:50.245284 kernel: Bridge firewalling registered Feb 13 20:09:50.242630 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:09:50.255654 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:09:50.264403 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:09:50.267456 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:09:50.284493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:09:50.303753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:09:50.316577 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:09:50.321639 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:09:50.332678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 20:09:50.349431 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:09:50.373577 systemd-resolved[213]: Positive Trust Anchors: Feb 13 20:09:50.374213 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:09:50.374606 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:09:50.382745 systemd-resolved[213]: Defaulting to hostname 'linux'. Feb 13 20:09:50.402584 dracut-cmdline[217]: dracut-dracut-053 Feb 13 20:09:50.402584 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13 Feb 13 20:09:50.388685 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:09:50.398945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:09:50.496116 kernel: SCSI subsystem initialized Feb 13 20:09:50.508123 kernel: Loading iSCSI transport class v2.0-870. Feb 13 20:09:50.521101 kernel: iscsi: registered transport (tcp) Feb 13 20:09:50.547352 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:09:50.547445 kernel: QLogic iSCSI HBA Driver Feb 13 20:09:50.605945 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:09:50.613336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:09:50.665115 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:09:50.665226 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:09:50.665260 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:09:50.713114 kernel: raid6: avx2x4 gen() 17753 MB/s Feb 13 20:09:50.730107 kernel: raid6: avx2x2 gen() 17793 MB/s Feb 13 20:09:50.747563 kernel: raid6: avx2x1 gen() 13873 MB/s Feb 13 20:09:50.747628 kernel: raid6: using algorithm avx2x2 gen() 17793 MB/s Feb 13 20:09:50.765595 kernel: raid6: .... xor() 17451 MB/s, rmw enabled Feb 13 20:09:50.765661 kernel: raid6: using avx2x2 recovery algorithm Feb 13 20:09:50.791123 kernel: xor: automatically using best checksumming function avx Feb 13 20:09:50.988101 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:09:51.004338 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:09:51.009364 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:09:51.053934 systemd-udevd[399]: Using default interface naming scheme 'v255'. Feb 13 20:09:51.062149 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 20:09:51.072318 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:09:51.098834 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Feb 13 20:09:51.140599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:09:51.153353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:09:51.262708 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:09:51.285399 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:09:51.345541 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:09:51.366516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:09:51.400171 kernel: scsi host0: Virtio SCSI HBA Feb 13 20:09:51.402239 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:09:51.435277 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 20:09:51.422028 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:09:51.457603 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 13 20:09:51.511166 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 20:09:51.511996 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:09:51.528235 kernel: AES CTR mode by8 optimization enabled Feb 13 20:09:51.548333 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:09:51.655022 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 13 20:09:51.665580 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 13 20:09:51.666615 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 13 20:09:51.666905 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 13 20:09:51.667221 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:09:51.667527 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:09:51.667565 kernel: GPT:17805311 != 25165823 Feb 13 20:09:51.667596 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:09:51.667622 kernel: GPT:17805311 != 25165823 Feb 13 20:09:51.667652 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:09:51.667682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:09:51.667713 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 13 20:09:51.548598 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:09:51.560446 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:09:51.571180 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:09:51.571464 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:09:51.583515 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:09:51.666030 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:09:51.677411 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 20:09:51.738101 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (445) Feb 13 20:09:51.764097 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451) Feb 13 20:09:51.782807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:09:51.812588 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Feb 13 20:09:51.831603 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Feb 13 20:09:51.852287 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Feb 13 20:09:51.868257 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Feb 13 20:09:51.896266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 20:09:51.922487 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:09:51.938989 disk-uuid[538]: Primary Header is updated. Feb 13 20:09:51.938989 disk-uuid[538]: Secondary Entries is updated. Feb 13 20:09:51.938989 disk-uuid[538]: Secondary Header is updated. Feb 13 20:09:51.975252 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:09:51.960825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:09:52.007278 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:09:52.007379 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:09:52.047617 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:09:53.001101 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:09:53.001217 disk-uuid[539]: The operation has completed successfully. Feb 13 20:09:53.098987 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:09:53.099201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:09:53.125365 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:09:53.157975 sh[566]: Success Feb 13 20:09:53.183102 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 20:09:53.291732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:09:53.299951 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 20:09:53.330591 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:09:53.372133 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d Feb 13 20:09:53.372235 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:09:53.372271 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:09:53.379635 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:09:53.386467 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:09:53.420139 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:09:53.426213 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:09:53.441212 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:09:53.447457 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
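verity-setup above maps /dev/mapper/usr, with the kernel picking the sha256-avx2 implementation. dm-verity hashes the read-only /usr partition into a Merkle tree whose root must equal the verity.usrhash value pinned on the kernel command line. A much-simplified sketch of the idea (real dm-verity salts every block and builds as many tree levels as the device needs; this collapses the tree to two levels):

    import hashlib

    BLOCK = 4096  # dm-verity's default data and hash block size

    def verity_root(data: bytes, salt: bytes = b"") -> str:
        # Hash each data block (padding the last), then hash the
        # concatenated digests into a single root.
        blocks = [data[i:i + BLOCK].ljust(BLOCK, b"\0")
                  for i in range(0, len(data), BLOCK)]
        leaves = b"".join(hashlib.sha256(salt + b).digest() for b in blocks)
        return hashlib.sha256(salt + leaves).hexdigest()

    # The kernel re-derives hashes on every read; if a block stops
    # matching, reads fail rather than return tampered data.
    print(verity_root(b"\0" * (3 * BLOCK + 17)))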
Feb 13 20:09:53.484293 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:09:53.532056 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:09:53.532122 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:09:53.532163 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:09:53.532194 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:09:53.532224 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:09:53.544861 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:09:53.561289 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:09:53.570907 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:09:53.597446 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 20:09:53.727284 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:09:53.774990 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:09:53.801181 ignition[654]: Ignition 2.19.0 Feb 13 20:09:53.801207 ignition[654]: Stage: fetch-offline Feb 13 20:09:53.805640 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:09:53.801274 ignition[654]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:53.833415 systemd-networkd[754]: lo: Link UP Feb 13 20:09:53.801294 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:53.833421 systemd-networkd[754]: lo: Gained carrier Feb 13 20:09:53.801459 ignition[654]: parsed url from cmdline: "" Feb 13 20:09:53.835416 systemd-networkd[754]: Enumeration completed Feb 13 20:09:53.801469 ignition[654]: no config URL provided Feb 13 20:09:53.836127 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:09:53.801480 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:09:53.836136 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:09:53.801497 ignition[654]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:09:53.838793 systemd-networkd[754]: eth0: Link UP Feb 13 20:09:53.801510 ignition[654]: failed to fetch config: resource requires networking Feb 13 20:09:53.838798 systemd-networkd[754]: eth0: Gained carrier Feb 13 20:09:53.801863 ignition[654]: Ignition finished successfully Feb 13 20:09:53.838809 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:09:53.932758 ignition[760]: Ignition 2.19.0 Feb 13 20:09:53.849653 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:09:53.932770 ignition[760]: Stage: fetch Feb 13 20:09:53.851204 systemd-networkd[754]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 20:09:53.933022 ignition[760]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:53.866750 systemd[1]: Reached target network.target - Network. Feb 13 20:09:53.933036 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:53.891917 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
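The DHCP lease logged above has the usual GCE shape: a /32 address whose gateway is not inside it, so the gateway is only reachable via an on-link host route. Python's ipaddress module makes the point directly:

    import ipaddress

    addr = ipaddress.ip_interface("10.128.0.67/32")   # from the lease above
    gw   = ipaddress.ip_address("10.128.0.1")

    # A /32 network contains only the address itself, so the gateway is
    # outside it; the stack must add an on-link route to 10.128.0.1
    # before the default route is usable.
    print(gw in addr.network)   # False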
Feb 13 20:09:53.933232 ignition[760]: parsed url from cmdline: "" Feb 13 20:09:53.945864 unknown[760]: fetched base config from "system" Feb 13 20:09:53.933239 ignition[760]: no config URL provided Feb 13 20:09:53.945879 unknown[760]: fetched base config from "system" Feb 13 20:09:53.933249 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:09:53.945899 unknown[760]: fetched user config from "gcp" Feb 13 20:09:53.933264 ignition[760]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:09:53.953746 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 20:09:53.933296 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 13 20:09:53.977333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 20:09:53.938804 ignition[760]: GET result: OK Feb 13 20:09:54.021409 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 20:09:53.938943 ignition[760]: parsing config with SHA512: b1a8d88b27fbecbf1418572c0db909f058e010804d6103ad56b73b3e4dc3769fafbe1c2a25600b79710de07780fcd039aae1fd53b80aba2fe8f3c7698dba24f0 Feb 13 20:09:54.047335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 20:09:53.947046 ignition[760]: fetch: fetch complete Feb 13 20:09:54.096233 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 20:09:53.947058 ignition[760]: fetch: fetch passed Feb 13 20:09:54.121251 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 20:09:53.947167 ignition[760]: Ignition finished successfully Feb 13 20:09:54.141308 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:09:54.018444 ignition[767]: Ignition 2.19.0 Feb 13 20:09:54.158291 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:09:54.018454 ignition[767]: Stage: kargs Feb 13 20:09:54.172292 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:09:54.018696 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:54.188307 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:09:54.018710 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:54.213327 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:09:54.019865 ignition[767]: kargs: kargs passed Feb 13 20:09:54.019928 ignition[767]: Ignition finished successfully Feb 13 20:09:54.093051 ignition[773]: Ignition 2.19.0 Feb 13 20:09:54.093061 ignition[773]: Stage: disks Feb 13 20:09:54.093542 ignition[773]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:54.093558 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:54.094592 ignition[773]: disks: disks passed Feb 13 20:09:54.094649 ignition[773]: Ignition finished successfully Feb 13 20:09:54.277844 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:09:54.422412 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:09:54.449270 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:09:54.603242 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:09:54.604306 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:09:54.605295 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
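fetch-offline failed earlier because the config lives behind the network; once networkd is up, the fetch stage GETs the user-data attribute from the metadata server and logs a SHA512 of the raw bytes before parsing. A sketch of the equivalent request (GCE's metadata server requires the Metadata-Flavor header; this only works from inside an instance):

    import hashlib
    import urllib.request

    req = urllib.request.Request(
        "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data",
        headers={"Metadata-Flavor": "Google"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        config = resp.read()

    # Ignition logs this digest, which is handy for matching a boot log
    # to the exact config bytes that were served.
    print("SHA512:", hashlib.sha512(config).hexdigest())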
Feb 13 20:09:54.625239 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:09:54.654400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:09:54.663944 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:09:54.722307 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (790) Feb 13 20:09:54.722371 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:09:54.722404 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:09:54.722455 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:09:54.664042 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:09:54.764279 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:09:54.764333 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:09:54.664110 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:09:54.748586 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:09:54.772586 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:09:54.796348 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:09:54.928148 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:09:54.937239 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:09:54.948229 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:09:54.958358 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:09:55.109948 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:09:55.127267 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:09:55.131416 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:09:55.169592 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:09:55.185263 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:09:55.205051 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:09:55.220472 ignition[906]: INFO : Ignition 2.19.0 Feb 13 20:09:55.220472 ignition[906]: INFO : Stage: mount Feb 13 20:09:55.246276 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:55.246276 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:55.246276 ignition[906]: INFO : mount: mount passed Feb 13 20:09:55.246276 ignition[906]: INFO : Ignition finished successfully Feb 13 20:09:55.225779 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:09:55.239449 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Feb 13 20:09:55.363274 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (917) Feb 13 20:09:55.363329 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:09:55.363354 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:09:55.363379 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:09:55.363414 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:09:55.363442 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:09:55.287396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:09:55.363266 systemd-networkd[754]: eth0: Gained IPv6LL Feb 13 20:09:55.365329 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:09:55.425445 ignition[933]: INFO : Ignition 2.19.0 Feb 13 20:09:55.425445 ignition[933]: INFO : Stage: files Feb 13 20:09:55.440251 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:55.440251 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:55.440251 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Feb 13 20:09:55.440251 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 20:09:55.440251 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 20:09:55.440251 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 20:09:55.440251 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 20:09:55.440251 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 20:09:55.438513 unknown[933]: wrote ssh authorized keys file for user: core Feb 13 20:09:55.542218 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:09:55.542218 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 20:09:55.602331 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 20:09:55.942491 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 
20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:09:55.959250 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 20:09:56.247699 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 20:09:56.770116 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 20:09:56.789267 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:09:56.789267 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 20:09:56.789267 ignition[933]: INFO : files: files passed Feb 13 20:09:56.789267 ignition[933]: INFO : Ignition finished successfully Feb 13 20:09:56.776770 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 20:09:56.805506 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 20:09:56.860335 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 20:09:56.865833 systemd[1]: ignition-quench.service: Deactivated successfully. 
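The files-stage ops above (fetching the kubernetes sysext image, symlinking it into /etc/extensions, writing update.conf, enabling prepare-helm.service) correspond one-to-one to sections of an Ignition v3 config. Roughly the shape that would produce them, with paths and URLs taken from the log, the spec version assumed, and the unit contents elided:

    import json

    config = {
        "ignition": {"version": "3.3.0"},   # assumed for illustration
        "storage": {
            "files": [{
                "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/"
                                       "releases/download/latest/"
                                       "kubernetes-v1.30.1-x86-64.raw"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service",
                       "enabled": True,
                       "contents": "..."}],   # real unit body elided
        },
    }
    print(json.dumps(config, indent=2))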
Feb 13 20:09:57.024382 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:09:57.024382 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:09:56.865971 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 20:09:57.074306 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:09:56.943032 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:09:56.955403 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:09:56.979307 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:09:57.042859 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:09:57.043000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:09:57.065223 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:09:57.084500 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:09:57.108563 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:09:57.115364 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:09:57.182393 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:09:57.207342 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:09:57.246255 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:09:57.259675 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:09:57.270715 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:09:57.290755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 20:09:57.290984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:09:57.325675 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:09:57.334712 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:09:57.352700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:09:57.369723 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:09:57.386752 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:09:57.424674 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:09:57.451474 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:09:57.473467 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:09:57.490500 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:09:57.507476 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:09:57.522386 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:09:57.522860 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:09:57.549592 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:09:57.550043 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:09:57.568684 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Feb 13 20:09:57.568909 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:09:57.588707 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:09:57.588940 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:09:57.627750 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:09:57.628036 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:09:57.636789 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:09:57.637011 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:09:57.663682 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:09:57.705225 ignition[987]: INFO : Ignition 2.19.0 Feb 13 20:09:57.705225 ignition[987]: INFO : Stage: umount Feb 13 20:09:57.754281 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:09:57.754281 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:09:57.754281 ignition[987]: INFO : umount: umount passed Feb 13 20:09:57.754281 ignition[987]: INFO : Ignition finished successfully Feb 13 20:09:57.726630 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:09:57.735269 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:09:57.735624 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:09:57.747574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:09:57.747887 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:09:57.802865 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:09:57.804296 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:09:57.804443 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:09:57.819286 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:09:57.819423 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:09:57.840954 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:09:57.841121 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:09:57.850914 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:09:57.850996 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:09:57.876538 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:09:57.876628 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:09:57.886578 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:09:57.886658 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:09:57.902615 systemd[1]: Stopped target network.target - Network. Feb 13 20:09:57.919517 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:09:57.919609 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:09:57.935573 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:09:57.953545 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:09:57.957211 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:09:57.979277 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:09:57.995285 systemd[1]: Stopped target sockets.target - Socket Units. 
Feb 13 20:09:58.013358 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:09:58.013458 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:09:58.031365 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:09:58.031472 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:09:58.049352 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:09:58.049530 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:09:58.068389 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:09:58.068511 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:09:58.087408 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:09:58.087588 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:09:58.105651 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:09:58.110184 systemd-networkd[754]: eth0: DHCPv6 lease lost Feb 13 20:09:58.124504 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:09:58.143035 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:09:58.143233 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:09:58.172512 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:09:58.172832 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:09:58.181352 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:09:58.181418 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:09:58.205247 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:09:58.240244 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:09:58.240520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:09:58.266521 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:09:58.266608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:09:58.284575 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:09:58.284666 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:09:58.292568 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:09:58.292648 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:09:58.309774 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:09:58.328027 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:09:58.328267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:09:58.361537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:09:58.361685 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:09:58.383538 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:09:58.770276 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 13 20:09:58.383608 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:09:58.393547 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:09:58.393633 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 20:09:58.429507 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:09:58.429629 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:09:58.473283 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:09:58.473554 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:09:58.509310 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:09:58.531243 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:09:58.531389 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:09:58.549370 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:09:58.549488 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:09:58.570356 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:09:58.570472 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:09:58.592342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:09:58.592477 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:09:58.613053 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:09:58.613250 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:09:58.633816 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:09:58.633952 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:09:58.655753 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:09:58.671307 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:09:58.717746 systemd[1]: Switching root. Feb 13 20:09:59.002257 systemd-journald[183]: Journal stopped Feb 13 20:10:01.598786 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:10:01.598865 kernel: SELinux: policy capability open_perms=1 Feb 13 20:10:01.598898 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:10:01.598924 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:10:01.598951 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:10:01.598976 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:10:01.599008 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:10:01.599044 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:10:01.599086 kernel: audit: type=1403 audit(1739477399.380:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:10:01.599132 systemd[1]: Successfully loaded SELinux policy in 97.502ms. Feb 13 20:10:01.599164 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.327ms. Feb 13 20:10:01.599189 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:10:01.599216 systemd[1]: Detected virtualization google. Feb 13 20:10:01.599246 systemd[1]: Detected architecture x86-64. Feb 13 20:10:01.599289 systemd[1]: Detected first boot. 
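The systemd 255 banner above encodes every compile-time option as a +/- token; parsing it is a two-liner and useful when comparing builds:

    # Abbreviated from the banner above; the full token list is in the log.
    features = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +TPM2 -SYSVINIT"

    enabled  = {f[1:] for f in features.split() if f[0] == "+"}
    disabled = {f[1:] for f in features.split() if f[0] == "-"}
    print(sorted(enabled))    # ['AUDIT', 'IMA', 'PAM', 'SECCOMP', 'SELINUX', 'SMACK', 'TPM2']
    print(sorted(disabled))   # ['APPARMOR', 'SYSVINIT']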
Feb 13 20:10:01.599315 systemd[1]: Initializing machine ID from random generator. Feb 13 20:10:01.599346 zram_generator::config[1028]: No configuration found. Feb 13 20:10:01.599381 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:10:01.599412 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:10:01.599450 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:10:01.599479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:10:01.599514 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:10:01.599545 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:10:01.599589 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:10:01.599622 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:10:01.599654 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:10:01.599692 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:10:01.599724 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:10:01.599757 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 20:10:01.599786 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:10:01.599820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:10:01.599850 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:10:01.599882 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:10:01.599915 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:10:01.599952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:10:01.599980 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 20:10:01.600009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:10:01.600049 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:10:01.600115 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:10:01.600151 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:10:01.600194 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:10:01.600232 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:10:01.600263 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:10:01.600304 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:10:01.600330 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:10:01.600355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:10:01.600383 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:10:01.600408 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:10:01.600434 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:10:01.600460 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 20:10:01.600498 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 20:10:01.600522 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 20:10:01.600544 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 20:10:01.600575 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 20:10:01.600604 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:10:01.600639 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 20:10:01.600663 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 20:10:01.600685 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 20:10:01.600709 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 20:10:01.600733 systemd[1]: Reached target machines.target - Containers. Feb 13 20:10:01.600760 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 20:10:01.600789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:10:01.600819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:10:01.600856 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 20:10:01.600889 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:10:01.600922 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:10:01.600954 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:10:01.600985 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 20:10:01.601016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:10:01.601048 kernel: ACPI: bus type drm_connector registered Feb 13 20:10:01.601102 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 20:10:01.601154 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 20:10:01.601186 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 20:10:01.601217 kernel: fuse: init (API version 7.39) Feb 13 20:10:01.601248 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 20:10:01.601277 kernel: loop: module loaded Feb 13 20:10:01.601305 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 20:10:01.601334 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:10:01.601365 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:10:01.601445 systemd-journald[1115]: Collecting audit messages is disabled. Feb 13 20:10:01.601511 systemd-journald[1115]: Journal started Feb 13 20:10:01.601588 systemd-journald[1115]: Runtime Journal (/run/log/journal/0fa2b2b042b941859139542881754dfa) is 8.0M, max 148.7M, 140.7M free. Feb 13 20:10:01.605115 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 20:10:00.373207 systemd[1]: Queued start job for default target multi-user.target. Feb 13 20:10:00.400156 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
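The machine ID initialized from the random generator above is the same 32-hex-digit value that names the journal directory (/run/log/journal/0fa2b2b042b941859139542881754dfa). Conceptually it is 128 random bits rendered as lowercase hex (systemd additionally sets UUID version and variant bits):

    import os

    machine_id = os.urandom(16).hex()   # 32 lowercase hex digits
    print(machine_id)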
Feb 13 20:10:00.400796 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 20:10:01.632172 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 20:10:01.664193 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:10:01.681115 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 20:10:01.687777 systemd[1]: Stopped verity-setup.service. Feb 13 20:10:01.711118 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:10:01.723117 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:10:01.734980 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 20:10:01.745580 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 20:10:01.756555 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 20:10:01.766552 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 20:10:01.776520 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 20:10:01.786534 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 20:10:01.797797 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 20:10:01.809830 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:10:01.821797 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 20:10:01.822084 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 20:10:01.833747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:10:01.834042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:10:01.845729 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:10:01.846022 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:10:01.856699 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:10:01.856972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:10:01.868743 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 20:10:01.869026 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 20:10:01.879721 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:10:01.880008 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:10:01.890753 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:10:01.901733 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 20:10:01.913765 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 20:10:01.925694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:10:01.953457 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 20:10:01.970243 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 20:10:01.994265 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 20:10:02.005326 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Feb 13 20:10:02.005651 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:10:02.019064 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 20:10:02.035412 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 20:10:02.058265 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 20:10:02.068480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:10:02.079892 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 20:10:02.096243 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 20:10:02.107751 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:10:02.127601 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 20:10:02.137299 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:10:02.146428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:10:02.164346 systemd-journald[1115]: Time spent on flushing to /var/log/journal/0fa2b2b042b941859139542881754dfa is 77.260ms for 930 entries. Feb 13 20:10:02.164346 systemd-journald[1115]: System Journal (/var/log/journal/0fa2b2b042b941859139542881754dfa) is 8.0M, max 584.8M, 576.8M free. Feb 13 20:10:02.307067 systemd-journald[1115]: Received client request to flush runtime journal. Feb 13 20:10:02.307212 kernel: loop0: detected capacity change from 0 to 142488 Feb 13 20:10:02.172712 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:10:02.191769 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:10:02.217384 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:10:02.233725 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:10:02.249866 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:10:02.261865 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:10:02.273739 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:10:02.295871 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 20:10:02.315441 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:10:02.327054 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:10:02.361597 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:10:02.368728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:10:02.384178 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Feb 13 20:10:02.389006 systemd-tmpfiles[1147]: ACLs are not supported, ignoring. Feb 13 20:10:02.393489 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:10:02.403614 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
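The journald flush report above doubles as a throughput datum: 930 entries moved from the runtime journal to /var/log/journal in 77.260 ms, roughly 12,000 entries per second:

    entries, ms = 930, 77.260          # from the systemd-journald message above
    print(f"{entries / (ms / 1000):,.0f} entries/s")   # 12,037 entries/s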
Feb 13 20:10:02.417991 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:10:02.419343 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:10:02.442559 kernel: loop1: detected capacity change from 0 to 140768 Feb 13 20:10:02.446530 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:10:02.547170 kernel: loop2: detected capacity change from 0 to 210664 Feb 13 20:10:02.593978 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:10:02.617415 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:10:02.677564 kernel: loop3: detected capacity change from 0 to 54824 Feb 13 20:10:02.688689 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Feb 13 20:10:02.690393 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Feb 13 20:10:02.699843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:10:02.768234 kernel: loop4: detected capacity change from 0 to 142488 Feb 13 20:10:02.819161 kernel: loop5: detected capacity change from 0 to 140768 Feb 13 20:10:02.886299 kernel: loop6: detected capacity change from 0 to 210664 Feb 13 20:10:02.926125 kernel: loop7: detected capacity change from 0 to 54824 Feb 13 20:10:02.963855 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Feb 13 20:10:02.964953 (sd-merge)[1173]: Merged extensions into '/usr'. Feb 13 20:10:02.980218 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:10:02.980263 systemd[1]: Reloading... Feb 13 20:10:03.145162 zram_generator::config[1195]: No configuration found. Feb 13 20:10:03.397722 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:10:03.466358 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:10:03.591578 systemd[1]: Reloading finished in 610 ms. Feb 13 20:10:03.623274 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:10:03.634001 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:10:03.661398 systemd[1]: Starting ensure-sysext.service... Feb 13 20:10:03.682363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:10:03.703196 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:10:03.703234 systemd[1]: Reloading... Feb 13 20:10:03.751767 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:10:03.753616 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:10:03.756016 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:10:03.756616 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 20:10:03.756733 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 20:10:03.767309 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. 
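loop4-loop7 above repeat the capacity signatures of loop0-loop3 because the same four sysext images are attached a second time for merging, after which sd-merge overlays them onto /usr. The mechanism is a read-only overlayfs in which earlier lowerdir entries win; a simplified rendering (the /run/extensions staging paths are hypothetical, and real systemd-sysext also merges /opt and handles masking):

    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-gce"]

    # In overlayfs the first lowerdir is the topmost layer, so extension
    # trees are listed ahead of the base /usr they override.
    lower = [f"/run/extensions/{name}/usr" for name in extensions] + ["/usr"]
    print("mount -t overlay overlay -o lowerdir=" + ":".join(lower) + " /usr")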
Feb 13 20:10:03.767514 systemd-tmpfiles[1240]: Skipping /boot Feb 13 20:10:03.819436 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:10:03.821982 systemd-tmpfiles[1240]: Skipping /boot Feb 13 20:10:03.904110 zram_generator::config[1266]: No configuration found. Feb 13 20:10:04.080024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:10:04.148354 systemd[1]: Reloading finished in 444 ms. Feb 13 20:10:04.173549 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:10:04.192047 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:10:04.220510 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:10:04.245446 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:10:04.272901 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:10:04.299538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:10:04.319826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:10:04.327624 augenrules[1328]: No rules Feb 13 20:10:04.339905 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:10:04.352404 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:10:04.362978 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:10:04.391523 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Feb 13 20:10:04.396736 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:10:04.415094 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:10:04.435426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:10:04.435952 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:10:04.445908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:10:04.467431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:10:04.490389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:10:04.500391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:10:04.500937 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:10:04.510371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:10:04.524176 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:10:04.536490 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:10:04.548368 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:10:04.550488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
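The tmpfiles warnings above show the precedence rule: the first line seen for a path wins and later duplicates are ignored with a warning. A minimal model of that pass:

    def dedupe(lines):
        seen, kept = set(), []
        for line in lines:
            path = line.split()[1]   # field 2 of a tmpfiles.d line is the path
            if path in seen:
                print(f'Duplicate line for path "{path}", ignoring.')
                continue
            seen.add(path)
            kept.append(line)
        return kept

    dedupe(["d /root 0700 root root -", "Z /root - - - -"])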
Feb 13 20:10:04.562193 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:10:04.575323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:10:04.576199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:10:04.589831 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:10:04.590552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:10:04.617138 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:10:04.692929 systemd[1]: Finished ensure-sysext.service. Feb 13 20:10:04.705207 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:10:04.705798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:10:04.708225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:10:04.720340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:10:04.736365 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:10:04.759349 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:10:04.775809 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:10:04.795323 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 20:10:04.804587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:10:04.815376 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:10:04.826292 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:10:04.836289 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:10:04.836352 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:10:04.837864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:10:04.838795 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:10:04.850866 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:10:04.852167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:10:04.863887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:10:04.870340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:10:04.881995 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:10:04.882490 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:10:04.901098 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 13 20:10:04.940522 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 20:10:04.940576 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:10:04.936517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 20:10:04.936630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:10:04.957151 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 20:10:04.952996 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 20:10:04.979114 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Feb 13 20:10:04.981377 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Feb 13 20:10:04.984491 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 20:10:04.982242 systemd-resolved[1325]: Positive Trust Anchors: Feb 13 20:10:04.982283 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:10:04.982374 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:10:05.011512 systemd-resolved[1325]: Defaulting to hostname 'linux'. Feb 13 20:10:05.023833 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:10:05.043213 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1349) Feb 13 20:10:05.046407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:10:05.117953 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Feb 13 20:10:05.166228 systemd-networkd[1385]: lo: Link UP Feb 13 20:10:05.166251 systemd-networkd[1385]: lo: Gained carrier Feb 13 20:10:05.173166 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:10:05.173510 systemd-networkd[1385]: Enumeration completed Feb 13 20:10:05.173692 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:10:05.174631 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:10:05.174652 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:10:05.175546 systemd-networkd[1385]: eth0: Link UP Feb 13 20:10:05.175566 systemd-networkd[1385]: eth0: Gained carrier Feb 13 20:10:05.175594 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:10:05.183395 systemd[1]: Reached target network.target - Network. Feb 13 20:10:05.187440 systemd-networkd[1385]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 20:10:05.199364 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:10:05.237762 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 20:10:05.247135 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:10:05.262938 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
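Note: the "Positive Trust Anchors" entry above is systemd-resolved's built-in copy of the DNSSEC root key-signing key (key tag 20326). If it ever had to be overridden, resolved reads additional anchors from *.positive files; the record below is copied from the log, only the file name is illustrative:

    # /etc/dnssec-trust-anchors.d/root.positive (hypothetical override file)
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d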
Feb 13 20:10:05.287152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:10:05.309086 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:10:05.321806 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:10:05.331607 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:10:05.350809 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:10:05.389956 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:10:05.390658 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:10:05.398253 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:10:05.414229 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:10:05.431055 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:10:05.442771 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:10:05.453426 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:10:05.465436 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:10:05.477602 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:10:05.487527 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:10:05.499323 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:10:05.510290 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:10:05.510364 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:10:05.519296 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:10:05.530241 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:10:05.542352 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:10:05.561451 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:10:05.573538 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:10:05.585589 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:10:05.596407 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:10:05.606281 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:10:05.615354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:10:05.615445 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:10:05.622257 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:10:05.645253 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:10:05.663297 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:10:05.665215 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:10:05.685792 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
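Note: both lvm2-activation passes above warn "Failed to connect to lvmetad. Falling back to device scanning." That is expected when the lvmetad cache daemon is not shipped; on LVM releases old enough to still know the option, direct scanning can be made the configured behavior rather than a fallback (an assumption about the LVM version, shown only as a sketch):

    # /etc/lvm/lvm.conf (excerpt; assumes LVM < 2.03, where use_lvmetad exists)
    global {
        # 0 = always scan devices directly instead of asking lvmetad
        use_lvmetad = 0
    }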
Feb 13 20:10:05.695301 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:10:05.718818 jq[1430]: false Feb 13 20:10:05.721537 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:10:05.737346 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 20:10:05.750285 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:10:05.769408 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:10:05.787358 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:10:05.810891 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:10:05.821941 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 13 20:10:05.823991 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:10:05.833388 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:10:05.837664 coreos-metadata[1428]: Feb 13 20:10:05.837 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 13 20:10:05.839785 coreos-metadata[1428]: Feb 13 20:10:05.839 INFO Fetch successful Feb 13 20:10:05.839785 coreos-metadata[1428]: Feb 13 20:10:05.839 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 13 20:10:05.845421 coreos-metadata[1428]: Feb 13 20:10:05.845 INFO Fetch successful Feb 13 20:10:05.845421 coreos-metadata[1428]: Feb 13 20:10:05.845 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 13 20:10:05.846301 coreos-metadata[1428]: Feb 13 20:10:05.846 INFO Fetch successful Feb 13 20:10:05.847309 coreos-metadata[1428]: Feb 13 20:10:05.846 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 13 20:10:05.851122 extend-filesystems[1431]: Found loop4 Feb 13 20:10:05.873187 coreos-metadata[1428]: Feb 13 20:10:05.847 INFO Fetch successful Feb 13 20:10:05.849906 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
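Note: the coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254. The same endpoints can be queried by hand from inside the instance; the only requirement is the Metadata-Flavor header:

    # Query the instance hostname exactly as coreos-metadata does above
    curl -s -H "Metadata-Flavor: Google" \
        "http://169.254.169.254/computeMetadata/v1/instance/hostname"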
Feb 13 20:10:05.873458 extend-filesystems[1431]: Found loop5 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found loop6 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found loop7 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found sda Feb 13 20:10:05.873458 extend-filesystems[1431]: Found sda1 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found sda2 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found sda3 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found usr Feb 13 20:10:05.873458 extend-filesystems[1431]: Found sda4 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found sda6 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found sda7 Feb 13 20:10:05.873458 extend-filesystems[1431]: Found sda9 Feb 13 20:10:05.873458 extend-filesystems[1431]: Checking size of /dev/sda9 Feb 13 20:10:06.035328 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 13 20:10:06.035427 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 13 20:10:05.898609 dbus-daemon[1429]: [system] SELinux support is enabled Feb 13 20:10:06.036262 extend-filesystems[1431]: Resized partition /dev/sda9 Feb 13 20:10:06.058325 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1364) Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: ---------------------------------------------------- Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: corporation. 
Support and training for ntp-4 are Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: available at https://www.nwtime.org/support Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: ---------------------------------------------------- Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: proto: precision = 0.094 usec (-23) Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: basedate set to 2025-02-01 Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: gps base set to 2025-02-02 (week 2352) Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: Listen normally on 3 eth0 10.128.0.67:123 Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: Listen normally on 4 lo [::1]:123 Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: bind(21) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:43%2#123 Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:43%2 Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: Listening on routing socket on fd #21 for interface updates Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:10:06.058405 ntpd[1436]: 13 Feb 20:10:05 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:10:05.881417 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:10:06.061453 update_engine[1447]: I20250213 20:10:05.994989 1447 main.cc:92] Flatcar Update Engine starting Feb 13 20:10:06.061453 update_engine[1447]: I20250213 20:10:06.025960 1447 update_check_scheduler.cc:74] Next update check in 7m33s Feb 13 20:10:05.922056 ntpd[1436]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:10:06.066656 extend-filesystems[1460]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:10:06.066656 extend-filesystems[1460]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 20:10:06.066656 extend-filesystems[1460]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 13 20:10:06.066656 extend-filesystems[1460]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 13 20:10:05.882526 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:10:06.162425 jq[1448]: true Feb 13 20:10:05.922126 ntpd[1436]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:10:06.162782 extend-filesystems[1431]: Resized filesystem in /dev/sda9 Feb 13 20:10:05.883592 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:10:05.922147 ntpd[1436]: ---------------------------------------------------- Feb 13 20:10:05.883941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:10:05.922175 ntpd[1436]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:10:06.174632 jq[1463]: true Feb 13 20:10:05.913519 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:10:05.922193 ntpd[1436]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:10:05.938735 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:10:05.922209 ntpd[1436]: corporation. Support and training for ntp-4 are Feb 13 20:10:05.940194 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:10:05.922224 ntpd[1436]: available at https://www.nwtime.org/support Feb 13 20:10:06.014766 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:10:05.922240 ntpd[1436]: ---------------------------------------------------- Feb 13 20:10:06.040509 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:10:05.937318 dbus-daemon[1429]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1385 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 20:10:06.040545 systemd-logind[1446]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 13 20:10:05.939613 ntpd[1436]: proto: precision = 0.094 usec (-23) Feb 13 20:10:06.040581 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:10:05.940047 ntpd[1436]: basedate set to 2025-02-01 Feb 13 20:10:06.041124 systemd-logind[1446]: New seat seat0. Feb 13 20:10:05.943980 ntpd[1436]: gps base set to 2025-02-02 (week 2352) Feb 13 20:10:06.070396 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:10:05.953060 ntpd[1436]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:10:06.070453 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:10:05.953166 ntpd[1436]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:10:06.082335 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:10:05.953447 ntpd[1436]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:10:06.082384 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:10:05.953507 ntpd[1436]: Listen normally on 3 eth0 10.128.0.67:123 Feb 13 20:10:06.094676 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:10:05.953571 ntpd[1436]: Listen normally on 4 lo [::1]:123 Feb 13 20:10:06.104920 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:10:05.953646 ntpd[1436]: bind(21) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:10:06.105279 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
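Note: the extend-filesystems transcript above is an online grow of the root filesystem, done while /dev/sda9 is mounted at /. The equivalent manual two-step operation would look like the following (device names are from the log; the growpart invocation is an assumption about the partition-resize step):

    # Grow partition 9 to fill the disk, then resize the mounted ext4 filesystem
    growpart /dev/sda 9        # assumption: cloud-utils growpart or an equivalent tool
    resize2fs /dev/sda9        # online resize; ext4 supports this while mounted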
Feb 13 20:10:05.953681 ntpd[1436]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:43%2#123 Feb 13 20:10:05.953707 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:43%2 Feb 13 20:10:05.953753 ntpd[1436]: Listening on routing socket on fd #21 for interface updates Feb 13 20:10:05.956314 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:10:05.956354 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:10:06.017964 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:10:06.211949 (ntainerd)[1476]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:10:06.226179 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:10:06.238839 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:10:06.249439 tar[1459]: linux-amd64/helm Feb 13 20:10:06.256726 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:10:06.268557 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 20:10:06.287571 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:10:06.326112 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:10:06.328496 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:10:06.359633 systemd[1]: Starting sshkeys.service... Feb 13 20:10:06.444723 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:10:06.473530 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:10:06.536728 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 20:10:06.544139 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 20:10:06.546490 dbus-daemon[1429]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1498 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 20:10:06.573957 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 20:10:06.629194 systemd-networkd[1385]: eth0: Gained IPv6LL Feb 13 20:10:06.640693 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:10:06.654946 systemd[1]: Reached target network-online.target - Network is Online. 
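Note: the ntpd bind failure above on fe80::4001:aff:fe80:43%2 is transient. The link-local address was not yet usable when ntpd first scanned interfaces, and since ntpd reports "Listening on routing socket ... for interface updates" it re-binds once eth0 gains IPv6, which this log confirms at 20:10:08. The eventual socket state can be checked with:

    # List UDP sockets bound to the NTP port, including the link-local one
    ss -ulpn | grep :123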
Feb 13 20:10:06.656602 coreos-metadata[1502]: Feb 13 20:10:06.656 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 13 20:10:06.660884 coreos-metadata[1502]: Feb 13 20:10:06.659 INFO Fetch failed with 404: resource not found Feb 13 20:10:06.660884 coreos-metadata[1502]: Feb 13 20:10:06.660 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 13 20:10:06.661667 coreos-metadata[1502]: Feb 13 20:10:06.660 INFO Fetch successful Feb 13 20:10:06.663215 coreos-metadata[1502]: Feb 13 20:10:06.662 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 13 20:10:06.669991 coreos-metadata[1502]: Feb 13 20:10:06.668 INFO Fetch failed with 404: resource not found Feb 13 20:10:06.669991 coreos-metadata[1502]: Feb 13 20:10:06.669 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 13 20:10:06.671688 coreos-metadata[1502]: Feb 13 20:10:06.671 INFO Fetch failed with 404: resource not found Feb 13 20:10:06.672584 coreos-metadata[1502]: Feb 13 20:10:06.672 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 13 20:10:06.677115 coreos-metadata[1502]: Feb 13 20:10:06.676 INFO Fetch successful Feb 13 20:10:06.681480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:10:06.686334 unknown[1502]: wrote ssh authorized keys file for user: core Feb 13 20:10:06.703798 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:10:06.719646 polkitd[1505]: Started polkitd version 121 Feb 13 20:10:06.724389 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Feb 13 20:10:06.753896 init.sh[1517]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 13 20:10:06.759503 init.sh[1517]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 13 20:10:06.759503 init.sh[1517]: + /usr/bin/google_instance_setup Feb 13 20:10:06.789921 polkitd[1505]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 20:10:06.790034 polkitd[1505]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 20:10:06.801607 polkitd[1505]: Finished loading, compiling and executing 2 rules Feb 13 20:10:06.807440 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 20:10:06.808631 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 20:10:06.813899 polkitd[1505]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 20:10:06.860192 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:10:06.857798 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:10:06.862029 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:10:06.881287 systemd[1]: Finished sshkeys.service. Feb 13 20:10:06.918488 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:10:06.940029 systemd-hostnamed[1498]: Hostname set to (transient) Feb 13 20:10:06.942714 systemd-resolved[1325]: System hostname changed to 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal'. 
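Note: systemd-hostnamed marks the hostname above as "(transient)": it came from DHCP/metadata rather than from /etc/hostname, which is why resolved reports a "System hostname changed" event instead of a static name. The distinction is visible directly in:

    hostnamectl status    # the 'Transient hostname' line carries the GCE-provided name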
Feb 13 20:10:07.176191 containerd[1476]: time="2025-02-13T20:10:07.175473920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:10:07.349525 containerd[1476]: time="2025-02-13T20:10:07.347981739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.357396812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.358639959Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.358690431Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.358926617Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.358956689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.359065528Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.359117041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.359426103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.359455064Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.359481017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361279 containerd[1476]: time="2025-02-13T20:10:07.359499896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361907 containerd[1476]: time="2025-02-13T20:10:07.359631676Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:10:07.361907 containerd[1476]: time="2025-02-13T20:10:07.359973292Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:10:07.362292 containerd[1476]: time="2025-02-13T20:10:07.362252540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:10:07.364771 containerd[1476]: time="2025-02-13T20:10:07.364122579Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:10:07.364771 containerd[1476]: time="2025-02-13T20:10:07.364311042Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:10:07.364771 containerd[1476]: time="2025-02-13T20:10:07.364392359Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:10:07.373708 containerd[1476]: time="2025-02-13T20:10:07.373657954Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.377279168Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.377343709Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.377374995Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.377400768Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.377630956Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378111199Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378344017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378377884Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378400675Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378426655Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378453902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378478019Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378503139Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:10:07.378759 containerd[1476]: time="2025-02-13T20:10:07.378631191Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 13 20:10:07.379517 containerd[1476]: time="2025-02-13T20:10:07.378690485Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.378720609Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379632213Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379696721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379722638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379764679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379805222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379850554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379893398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379936959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380000 containerd[1476]: time="2025-02-13T20:10:07.379960717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380819 containerd[1476]: time="2025-02-13T20:10:07.379984232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.380819 containerd[1476]: time="2025-02-13T20:10:07.380220093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381007334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381048207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381121620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381178239Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381243967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381272438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381295900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381431448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381464925Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:10:07.381569 containerd[1476]: time="2025-02-13T20:10:07.381505244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:10:07.382419 containerd[1476]: time="2025-02-13T20:10:07.381540514Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:10:07.382419 containerd[1476]: time="2025-02-13T20:10:07.382097812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.382419 containerd[1476]: time="2025-02-13T20:10:07.382143293Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:10:07.382419 containerd[1476]: time="2025-02-13T20:10:07.382188535Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:10:07.382419 containerd[1476]: time="2025-02-13T20:10:07.382206481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:10:07.383671 containerd[1476]: time="2025-02-13T20:10:07.383075900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:10:07.383671 containerd[1476]: time="2025-02-13T20:10:07.383416690Z" level=info msg="Connect containerd service" Feb 13 20:10:07.383671 containerd[1476]: time="2025-02-13T20:10:07.383517997Z" level=info msg="using legacy CRI server" Feb 13 20:10:07.383671 containerd[1476]: time="2025-02-13T20:10:07.383533820Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:10:07.384981 containerd[1476]: time="2025-02-13T20:10:07.384664783Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:10:07.386477 containerd[1476]: time="2025-02-13T20:10:07.386326662Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:10:07.387095 containerd[1476]: time="2025-02-13T20:10:07.386999665Z" level=info msg="Start subscribing containerd event" Feb 13 20:10:07.387384 containerd[1476]: time="2025-02-13T20:10:07.387224031Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:10:07.387384 containerd[1476]: time="2025-02-13T20:10:07.387224294Z" level=info msg="Start recovering state" Feb 13 20:10:07.387384 containerd[1476]: time="2025-02-13T20:10:07.387381005Z" level=info msg="Start event monitor" Feb 13 20:10:07.387580 containerd[1476]: time="2025-02-13T20:10:07.387401617Z" level=info msg="Start snapshots syncer" Feb 13 20:10:07.389585 containerd[1476]: time="2025-02-13T20:10:07.387420196Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:10:07.389585 containerd[1476]: time="2025-02-13T20:10:07.388805953Z" level=info msg="Start streaming server" Feb 13 20:10:07.389885 containerd[1476]: time="2025-02-13T20:10:07.389852714Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:10:07.390151 containerd[1476]: time="2025-02-13T20:10:07.390113894Z" level=info msg="containerd successfully booted in 0.218864s" Feb 13 20:10:07.390923 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:10:07.757407 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:10:07.872029 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:10:07.890538 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:10:07.909524 systemd[1]: Started sshd@0-10.128.0.67:22-139.178.89.65:40250.service - OpenSSH per-connection server daemon (139.178.89.65:40250). Feb 13 20:10:07.934777 systemd[1]: issuegen.service: Deactivated successfully. 
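Note: the containerd dump above shows the effective CRI configuration: overlayfs as the snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, and a CNI load error that is expected until a network plugin installs a conffile into /etc/cni/net.d. The corresponding knobs in a containerd 1.7 config file look roughly like this (an excerpt sketch, not this image's actual /etc/containerd/config.toml):

    # /etc/containerd/config.toml (excerpt, assumption)
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir  = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"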
Feb 13 20:10:07.935171 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:10:07.957629 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:10:08.020560 tar[1459]: linux-amd64/LICENSE Feb 13 20:10:08.020560 tar[1459]: linux-amd64/README.md Feb 13 20:10:08.030244 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:10:08.055827 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:10:08.074350 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:10:08.086590 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:10:08.098413 instance-setup[1522]: INFO Running google_set_multiqueue. Feb 13 20:10:08.098652 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:10:08.120923 instance-setup[1522]: INFO Set channels for eth0 to 2. Feb 13 20:10:08.125718 instance-setup[1522]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 13 20:10:08.128149 instance-setup[1522]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 13 20:10:08.128500 instance-setup[1522]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 13 20:10:08.131147 instance-setup[1522]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 13 20:10:08.131253 instance-setup[1522]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 13 20:10:08.134300 instance-setup[1522]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 13 20:10:08.134376 instance-setup[1522]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 13 20:10:08.136466 instance-setup[1522]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 13 20:10:08.147180 instance-setup[1522]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 20:10:08.152214 instance-setup[1522]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 20:10:08.154595 instance-setup[1522]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 20:10:08.154672 instance-setup[1522]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 20:10:08.184627 init.sh[1517]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 20:10:08.342182 sshd[1554]: Accepted publickey for core from 139.178.89.65 port 40250 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:10:08.345682 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:08.369368 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:10:08.389267 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:10:08.408160 systemd-logind[1446]: New session 1 of user core. Feb 13 20:10:08.435651 startup-script[1594]: INFO Starting startup scripts. Feb 13 20:10:08.436997 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:10:08.447144 startup-script[1594]: INFO No startup scripts found in metadata. Feb 13 20:10:08.447264 startup-script[1594]: INFO Finished running startup scripts. Feb 13 20:10:08.463713 systemd[1]: Starting user@500.service - User Manager for UID 500... 
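Note: google_set_multiqueue above pins each virtio-net queue's IRQ to one vCPU and writes per-queue XPS masks; the "write error: Value too large" lines are it probing sysfs files whose masks this 2-vCPU shape rejects, and it continues past them. Stripped to its essentials, the operation for queue 0 is just:

    # Pin IRQ 31 (eth0 queue 0, per the log) to CPU 0,
    # and steer tx-0 transmits to CPU 0 (XPS bitmask 1 = CPU0)
    echo 0 > /proc/irq/31/smp_affinity_list
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus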
Feb 13 20:10:08.489111 init.sh[1517]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 20:10:08.489111 init.sh[1517]: + daemon_pids=() Feb 13 20:10:08.489111 init.sh[1517]: + for d in accounts clock_skew network Feb 13 20:10:08.489111 init.sh[1517]: + daemon_pids+=($!) Feb 13 20:10:08.489111 init.sh[1517]: + for d in accounts clock_skew network Feb 13 20:10:08.489111 init.sh[1517]: + daemon_pids+=($!) Feb 13 20:10:08.489111 init.sh[1517]: + for d in accounts clock_skew network Feb 13 20:10:08.489111 init.sh[1517]: + daemon_pids+=($!) Feb 13 20:10:08.489111 init.sh[1517]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 20:10:08.489111 init.sh[1517]: + /usr/bin/systemd-notify --ready Feb 13 20:10:08.489878 init.sh[1600]: + /usr/bin/google_accounts_daemon Feb 13 20:10:08.491046 init.sh[1601]: + /usr/bin/google_clock_skew_daemon Feb 13 20:10:08.491408 init.sh[1602]: + /usr/bin/google_network_daemon Feb 13 20:10:08.509367 (systemd)[1599]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:10:08.515287 systemd[1]: Started oem-gce.service - GCE Linux Agent. Feb 13 20:10:08.531275 init.sh[1517]: + wait -n 1600 1601 1602 Feb 13 20:10:08.903965 systemd[1599]: Queued start job for default target default.target. Feb 13 20:10:08.909940 systemd[1599]: Created slice app.slice - User Application Slice. Feb 13 20:10:08.910007 systemd[1599]: Reached target paths.target - Paths. Feb 13 20:10:08.910034 systemd[1599]: Reached target timers.target - Timers. Feb 13 20:10:08.914317 systemd[1599]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:10:08.922836 ntpd[1436]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:43%2]:123 Feb 13 20:10:08.923519 ntpd[1436]: 13 Feb 20:10:08 ntpd[1436]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:43%2]:123 Feb 13 20:10:08.957421 systemd[1599]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:10:08.957658 systemd[1599]: Reached target sockets.target - Sockets. Feb 13 20:10:08.957700 systemd[1599]: Reached target basic.target - Basic System. Feb 13 20:10:08.957791 systemd[1599]: Reached target default.target - Main User Target. Feb 13 20:10:08.957852 systemd[1599]: Startup finished in 424ms. Feb 13 20:10:08.958067 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:10:08.975769 google-networking[1602]: INFO Starting Google Networking daemon. Feb 13 20:10:08.977426 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:10:09.112051 google-clock-skew[1601]: INFO Starting Google Clock Skew daemon. Feb 13 20:10:09.125561 google-clock-skew[1601]: INFO Clock drift token has changed: 0. Feb 13 20:10:09.219322 groupadd[1621]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 20:10:09.229841 groupadd[1621]: group added to /etc/gshadow: name=google-sudoers Feb 13 20:10:09.243307 systemd[1]: Started sshd@1-10.128.0.67:22-139.178.89.65:33544.service - OpenSSH per-connection server daemon (139.178.89.65:33544). Feb 13 20:10:09.336617 groupadd[1621]: new group: name=google-sudoers, GID=1000 Feb 13 20:10:09.348371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:10:09.361897 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:10:09.374024 systemd[1]: Startup finished in 1.129s (kernel) + 9.579s (initrd) + 10.080s (userspace) = 20.788s. 
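Note: the init.sh xtrace above shows the oem-gce launcher's bookkeeping, with each daemon's launch line traced under its child PID (init.sh[1600..1602]). A plausible reconstruction of the loop, with the launch statement inferred from those child traces, would be:

    # Reconstructed sketch of oem-gce's init.sh daemon loop (launch line is an inference)
    trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
    daemon_pids=()
    for d in accounts clock_skew network; do
        "/usr/bin/google_${d}_daemon" &   # matches google_accounts_daemon etc. in the log
        daemon_pids+=($!)
    done
    NOTIFY_SOCKET=/run/systemd/notify     # assumes the variable is exported (or allexport)
    /usr/bin/systemd-notify --ready       # tell systemd the Type=notify unit is up
    wait -n "${daemon_pids[@]}"           # return (and let systemd react) if any daemon exits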
Feb 13 20:10:09.379533 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:10:09.402399 google-accounts[1600]: INFO Starting Google Accounts daemon. Feb 13 20:10:09.427272 google-accounts[1600]: WARNING OS Login not installed. Feb 13 20:10:09.430359 google-accounts[1600]: INFO Creating a new user account for 0. Feb 13 20:10:09.436880 init.sh[1642]: useradd: invalid user name '0': use --badname to ignore Feb 13 20:10:09.437281 google-accounts[1600]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 13 20:10:09.588460 sshd[1623]: Accepted publickey for core from 139.178.89.65 port 33544 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:10:09.591055 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:09.599200 systemd-logind[1446]: New session 2 of user core. Feb 13 20:10:09.606299 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:10:09.810022 sshd[1623]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:09.818243 systemd[1]: sshd@1-10.128.0.67:22-139.178.89.65:33544.service: Deactivated successfully. Feb 13 20:10:09.821649 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:10:09.823343 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:10:09.825216 systemd-logind[1446]: Removed session 2. Feb 13 20:10:10.000706 google-clock-skew[1601]: INFO Synced system time with hardware clock. Feb 13 20:10:10.001261 systemd-resolved[1325]: Clock change detected. Flushing caches. Feb 13 20:10:10.004123 systemd[1]: Started sshd@2-10.128.0.67:22-139.178.89.65:33546.service - OpenSSH per-connection server daemon (139.178.89.65:33546). Feb 13 20:10:10.304568 sshd[1653]: Accepted publickey for core from 139.178.89.65 port 33546 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:10:10.307488 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:10.317251 systemd-logind[1446]: New session 3 of user core. Feb 13 20:10:10.322411 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:10:10.515856 sshd[1653]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:10.522736 systemd[1]: sshd@2-10.128.0.67:22-139.178.89.65:33546.service: Deactivated successfully. Feb 13 20:10:10.526482 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:10:10.529449 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:10:10.531767 systemd-logind[1446]: Removed session 3. Feb 13 20:10:10.538947 kubelet[1635]: E0213 20:10:10.538883 1635 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:10:10.542631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:10:10.542932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:10:10.543549 systemd[1]: kubelet.service: Consumed 1.331s CPU time. 
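Note: the kubelet failure above is the expected first-boot state: /var/lib/kubelet/config.yaml does not exist until kubeadm (or whatever provisions this node) writes it, so kubelet exits and systemd schedules the restart seen later. Purely as a sketch, a minimal file of the right shape is:

    # /var/lib/kubelet/config.yaml (minimal sketch; kubeadm normally generates this)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # consistent with the SystemdCgroup=true runc option above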
Feb 13 20:10:10.573610 systemd[1]: Started sshd@3-10.128.0.67:22-139.178.89.65:33558.service - OpenSSH per-connection server daemon (139.178.89.65:33558). Feb 13 20:10:10.869240 sshd[1663]: Accepted publickey for core from 139.178.89.65 port 33558 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:10:10.871493 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:10.878293 systemd-logind[1446]: New session 4 of user core. Feb 13 20:10:10.888413 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:10:11.086063 sshd[1663]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:11.091123 systemd[1]: sshd@3-10.128.0.67:22-139.178.89.65:33558.service: Deactivated successfully. Feb 13 20:10:11.093853 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:10:11.096232 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:10:11.097865 systemd-logind[1446]: Removed session 4. Feb 13 20:10:11.145696 systemd[1]: Started sshd@4-10.128.0.67:22-139.178.89.65:33564.service - OpenSSH per-connection server daemon (139.178.89.65:33564). Feb 13 20:10:11.428997 sshd[1670]: Accepted publickey for core from 139.178.89.65 port 33564 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:10:11.431279 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:11.438807 systemd-logind[1446]: New session 5 of user core. Feb 13 20:10:11.445416 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:10:11.625728 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:10:11.626346 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:10:11.644776 sudo[1673]: pam_unix(sudo:session): session closed for user root Feb 13 20:10:11.687897 sshd[1670]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:11.693867 systemd[1]: sshd@4-10.128.0.67:22-139.178.89.65:33564.service: Deactivated successfully. Feb 13 20:10:11.696747 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:10:11.699123 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:10:11.700757 systemd-logind[1446]: Removed session 5. Feb 13 20:10:11.750734 systemd[1]: Started sshd@5-10.128.0.67:22-139.178.89.65:33572.service - OpenSSH per-connection server daemon (139.178.89.65:33572). Feb 13 20:10:12.040361 sshd[1678]: Accepted publickey for core from 139.178.89.65 port 33572 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:10:12.042706 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:12.049816 systemd-logind[1446]: New session 6 of user core. Feb 13 20:10:12.056475 systemd[1]: Started session-6.scope - Session 6 of User core. 
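Note: the first sudo action of session 5 above switches SELinux to enforcing at runtime. setenforce only flips the live state; surviving a reboot takes a matching config change:

    setenforce 1          # runtime toggle, as run above
    getenforce            # should now print 'Enforcing'
    # the persistent mode lives in /etc/selinux/config (SELINUX=enforcing)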
Feb 13 20:10:12.222706 sudo[1682]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:10:12.223333 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:10:12.228846 sudo[1682]: pam_unix(sudo:session): session closed for user root Feb 13 20:10:12.243895 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:10:12.244504 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:10:12.261573 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:10:12.267352 auditctl[1685]: No rules Feb 13 20:10:12.267962 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:10:12.268292 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:10:12.275721 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:10:12.325752 augenrules[1703]: No rules Feb 13 20:10:12.326751 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:10:12.328981 sudo[1681]: pam_unix(sudo:session): session closed for user root Feb 13 20:10:12.376445 sshd[1678]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:12.382000 systemd[1]: sshd@5-10.128.0.67:22-139.178.89.65:33572.service: Deactivated successfully. Feb 13 20:10:12.384987 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:10:12.387544 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:10:12.389581 systemd-logind[1446]: Removed session 6. Feb 13 20:10:12.434283 systemd[1]: Started sshd@6-10.128.0.67:22-139.178.89.65:33586.service - OpenSSH per-connection server daemon (139.178.89.65:33586). Feb 13 20:10:12.719720 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 33586 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:10:12.721896 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:12.727732 systemd-logind[1446]: New session 7 of user core. Feb 13 20:10:12.735358 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:10:12.900486 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:10:12.901083 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:10:13.370834 (dockerd)[1730]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:10:13.371275 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:10:13.819168 dockerd[1730]: time="2025-02-13T20:10:13.818964528Z" level=info msg="Starting up" Feb 13 20:10:13.948776 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport123424600-merged.mount: Deactivated successfully. Feb 13 20:10:14.042641 dockerd[1730]: time="2025-02-13T20:10:14.042359495Z" level=info msg="Loading containers: start." Feb 13 20:10:14.210161 kernel: Initializing XFRM netlink socket Feb 13 20:10:14.327250 systemd-networkd[1385]: docker0: Link UP Feb 13 20:10:14.352392 dockerd[1730]: time="2025-02-13T20:10:14.352322632Z" level=info msg="Loading containers: done." 
Feb 13 20:10:14.375656 dockerd[1730]: time="2025-02-13T20:10:14.375585748Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:10:14.375974 dockerd[1730]: time="2025-02-13T20:10:14.375730344Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:10:14.375974 dockerd[1730]: time="2025-02-13T20:10:14.375897490Z" level=info msg="Daemon has completed initialization" Feb 13 20:10:14.422249 dockerd[1730]: time="2025-02-13T20:10:14.421694225Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:10:14.422017 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:10:15.471915 containerd[1476]: time="2025-02-13T20:10:15.471839162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 20:10:16.006746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount54953314.mount: Deactivated successfully. Feb 13 20:10:17.745078 containerd[1476]: time="2025-02-13T20:10:17.744995789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.747780 containerd[1476]: time="2025-02-13T20:10:17.747704687Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32684842" Feb 13 20:10:17.752117 containerd[1476]: time="2025-02-13T20:10:17.750928667Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.756376 containerd[1476]: time="2025-02-13T20:10:17.756321065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:17.757992 containerd[1476]: time="2025-02-13T20:10:17.757941837Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 2.286041395s" Feb 13 20:10:17.758208 containerd[1476]: time="2025-02-13T20:10:17.758178309Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 20:10:17.793410 containerd[1476]: time="2025-02-13T20:10:17.793356734Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 20:10:19.579864 containerd[1476]: time="2025-02-13T20:10:19.579793492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:19.581892 containerd[1476]: time="2025-02-13T20:10:19.581811290Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29613479" Feb 13 20:10:19.583024 containerd[1476]: time="2025-02-13T20:10:19.582911730Z" level=info msg="ImageCreate event 
name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:19.588765 containerd[1476]: time="2025-02-13T20:10:19.588680864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:19.591122 containerd[1476]: time="2025-02-13T20:10:19.590879418Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.797375949s" Feb 13 20:10:19.591122 containerd[1476]: time="2025-02-13T20:10:19.590942147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 20:10:19.625808 containerd[1476]: time="2025-02-13T20:10:19.625758563Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 20:10:20.793600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:10:20.803495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:10:20.874118 containerd[1476]: time="2025-02-13T20:10:20.873504271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:20.877114 containerd[1476]: time="2025-02-13T20:10:20.876132806Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17784046" Feb 13 20:10:20.895295 containerd[1476]: time="2025-02-13T20:10:20.895219676Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:21.043487 containerd[1476]: time="2025-02-13T20:10:21.043418186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:21.052217 containerd[1476]: time="2025-02-13T20:10:21.050758616Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.424879884s" Feb 13 20:10:21.052217 containerd[1476]: time="2025-02-13T20:10:21.050835126Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 20:10:21.100438 containerd[1476]: time="2025-02-13T20:10:21.099906778Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 20:10:21.115364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:10:21.132961 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:10:21.202320 kubelet[1960]: E0213 20:10:21.202245 1960 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:10:21.208069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:10:21.208388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:10:22.269428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322874315.mount: Deactivated successfully. Feb 13 20:10:22.854285 containerd[1476]: time="2025-02-13T20:10:22.854207949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:22.855741 containerd[1476]: time="2025-02-13T20:10:22.855621435Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29059753" Feb 13 20:10:22.857348 containerd[1476]: time="2025-02-13T20:10:22.857270536Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:22.860240 containerd[1476]: time="2025-02-13T20:10:22.860193808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:22.861812 containerd[1476]: time="2025-02-13T20:10:22.861131267Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.761133707s" Feb 13 20:10:22.861812 containerd[1476]: time="2025-02-13T20:10:22.861182901Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 20:10:22.892865 containerd[1476]: time="2025-02-13T20:10:22.892809930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:10:23.264916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035989744.mount: Deactivated successfully. 
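The kubelet exit above is the expected bootstrap failure mode: /var/lib/kubelet/config.yaml has not been written yet (on a node like this it is normally generated by kubeadm, driven by the install.sh invoked under sudo earlier), and systemd keeps restarting the unit until the file appears. A hypothetical minimal file that the loader would accept, with cgroupDriver matching the "CgroupDriver":"systemd" seen in the node config later in this log; in practice, let kubeadm generate the real thing:

    from pathlib import Path
    from textwrap import dedent

    CONFIG = dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
    """)

    def write_kubelet_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
        p = Path(path)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_text(CONFIG)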
Feb 13 20:10:24.362908 containerd[1476]: time="2025-02-13T20:10:24.362829116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.364785 containerd[1476]: time="2025-02-13T20:10:24.364694414Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Feb 13 20:10:24.366167 containerd[1476]: time="2025-02-13T20:10:24.366059856Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.370134 containerd[1476]: time="2025-02-13T20:10:24.370042823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.371814 containerd[1476]: time="2025-02-13T20:10:24.371585131Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.478711823s" Feb 13 20:10:24.371814 containerd[1476]: time="2025-02-13T20:10:24.371636521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 20:10:24.406858 containerd[1476]: time="2025-02-13T20:10:24.406761972Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 20:10:24.800521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204782841.mount: Deactivated successfully. 
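The mount unit names in these entries (var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount) are systemd's escaped spelling of paths such as /var/lib/containerd/tmpmounts/containerd-mountNNN: slashes become dashes, and a literal dash inside a path component becomes \x2d. A simplified sketch of that escaping (systemd-escape --path is the authoritative tool; corner cases like a leading dot are ignored here):

    import string

    _SAFE = set(string.ascii_letters + string.digits + ":_.")

    def escape_path(path: str) -> str:
        # Approximate `systemd-escape --path`: '/' -> '-', unsafe bytes -> \xXX.
        def esc(segment: str) -> str:
            return "".join(
                c if c in _SAFE else "".join(f"\\x{b:02x}" for b in c.encode())
                for c in segment)
        return "-".join(esc(s) for s in path.strip("/").split("/"))

    assert escape_path("/var/lib/containerd/tmpmounts/containerd-mount4204782841") \
        == "var-lib-containerd-tmpmounts-containerd\\x2dmount4204782841"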
Feb 13 20:10:24.808612 containerd[1476]: time="2025-02-13T20:10:24.808487042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.809920 containerd[1476]: time="2025-02-13T20:10:24.809834036Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Feb 13 20:10:24.811568 containerd[1476]: time="2025-02-13T20:10:24.811486394Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.815443 containerd[1476]: time="2025-02-13T20:10:24.815345463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:24.819034 containerd[1476]: time="2025-02-13T20:10:24.818972213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 412.145468ms" Feb 13 20:10:24.819229 containerd[1476]: time="2025-02-13T20:10:24.819037250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 20:10:24.856719 containerd[1476]: time="2025-02-13T20:10:24.856579557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 20:10:25.318059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119745617.mount: Deactivated successfully. Feb 13 20:10:27.604676 containerd[1476]: time="2025-02-13T20:10:27.604590151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:27.606570 containerd[1476]: time="2025-02-13T20:10:27.606504250Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Feb 13 20:10:27.608107 containerd[1476]: time="2025-02-13T20:10:27.608006221Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:27.612268 containerd[1476]: time="2025-02-13T20:10:27.612177050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:27.614081 containerd[1476]: time="2025-02-13T20:10:27.613891483Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.757257399s" Feb 13 20:10:27.614081 containerd[1476]: time="2025-02-13T20:10:27.613944280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 20:10:31.294183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 20:10:31.303682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:10:31.369870 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:10:31.370045 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:10:31.370508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:10:31.385569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:10:31.424194 systemd[1]: Reloading requested from client PID 2153 ('systemctl') (unit session-7.scope)... Feb 13 20:10:31.424223 systemd[1]: Reloading... Feb 13 20:10:31.619216 zram_generator::config[2192]: No configuration found. Feb 13 20:10:31.787317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:10:31.915877 systemd[1]: Reloading finished in 490 ms. Feb 13 20:10:32.011703 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:10:32.011870 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:10:32.012388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:10:32.020696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:10:32.536721 systemd[1]: Started sshd@7-10.128.0.67:22-194.0.234.38:24500.service - OpenSSH per-connection server daemon (194.0.234.38:24500). Feb 13 20:10:33.340389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:10:33.352872 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:10:33.416109 kubelet[2245]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:10:33.416109 kubelet[2245]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:10:33.416109 kubelet[2245]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
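The three deprecation warnings recur on every kubelet start in this log. Two of the flags have direct KubeletConfiguration replacements; the pause image does not, since (as the server.go line that follows notes) the sandbox image is now owned by the CRI runtime. The mapping below assumes v1beta1 field names as of this kubelet (v1.30), and the socket path is illustrative rather than read from the log, so verify both against the configuration reference for the version in use:

    # Deprecated flag -> KubeletConfiguration replacement (assumed v1.30 names).
    FLAG_TO_FIELD = {
        "--container-runtime-endpoint": ("containerRuntimeEndpoint",
                                         "unix:///run/containerd/containerd.sock"),
        "--volume-plugin-dir": ("volumePluginDir",
                                "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"),
        # --pod-infra-container-image: no config-file field; set the sandbox
        # image in the CRI runtime instead (e.g. containerd's sandbox_image).
    }

    def config_lines() -> str:
        """Render the replacement fields as YAML lines for config.yaml."""
        return "\n".join(f"{field}: {value}"
                         for field, value in FLAG_TO_FIELD.values())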
Feb 13 20:10:33.416909 kubelet[2245]: I0213 20:10:33.416828 2245 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:10:34.064766 kubelet[2245]: I0213 20:10:34.064701 2245 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:10:34.064766 kubelet[2245]: I0213 20:10:34.064741 2245 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:10:34.065198 kubelet[2245]: I0213 20:10:34.065159 2245 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:10:34.099250 kubelet[2245]: E0213 20:10:34.099170 2245 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.100768 kubelet[2245]: I0213 20:10:34.100586 2245 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:10:34.122848 kubelet[2245]: I0213 20:10:34.122774 2245 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:10:34.123325 kubelet[2245]: I0213 20:10:34.123271 2245 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:10:34.123646 kubelet[2245]: I0213 20:10:34.123334 2245 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:10:34.123861 kubelet[2245]: I0213 20:10:34.123671 2245 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:10:34.123861 kubelet[2245]: I0213 20:10:34.123692 2245 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:10:34.126466 kubelet[2245]: I0213 20:10:34.126232 2245 
state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:10:34.128750 kubelet[2245]: I0213 20:10:34.128503 2245 kubelet.go:400] "Attempting to sync node with API server" Feb 13 20:10:34.128750 kubelet[2245]: I0213 20:10:34.128554 2245 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:10:34.128750 kubelet[2245]: I0213 20:10:34.128596 2245 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:10:34.128750 kubelet[2245]: I0213 20:10:34.128619 2245 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:10:34.131427 kubelet[2245]: W0213 20:10:34.130543 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.131427 kubelet[2245]: E0213 20:10:34.130649 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.141915 kubelet[2245]: W0213 20:10:34.141829 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.141915 kubelet[2245]: E0213 20:10:34.141930 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.142215 kubelet[2245]: I0213 20:10:34.142123 2245 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:10:34.144669 kubelet[2245]: I0213 20:10:34.144612 2245 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:10:34.146632 kubelet[2245]: W0213 20:10:34.144715 2245 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 20:10:34.147653 kubelet[2245]: I0213 20:10:34.147619 2245 server.go:1264] "Started kubelet" Feb 13 20:10:34.153227 kubelet[2245]: I0213 20:10:34.153172 2245 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:10:34.155114 kubelet[2245]: I0213 20:10:34.155055 2245 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:10:34.155580 kubelet[2245]: I0213 20:10:34.155502 2245 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:10:34.156173 kubelet[2245]: I0213 20:10:34.156078 2245 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:10:34.158143 kubelet[2245]: I0213 20:10:34.158112 2245 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:10:34.165883 sshd[2238]: Invalid user backups from 194.0.234.38 port 24500 Feb 13 20:10:34.171140 kubelet[2245]: I0213 20:10:34.170566 2245 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:10:34.171683 kubelet[2245]: I0213 20:10:34.171654 2245 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:10:34.171784 kubelet[2245]: I0213 20:10:34.171755 2245 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:10:34.174524 kubelet[2245]: E0213 20:10:34.174461 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="200ms" Feb 13 20:10:34.175003 kubelet[2245]: W0213 20:10:34.174928 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.175146 kubelet[2245]: E0213 20:10:34.175021 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.179186 kubelet[2245]: E0213 20:10:34.178972 2245 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal.1823dd829d2c7caf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,UID:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:10:34.147568815 +0000 UTC m=+0.788678394,LastTimestamp:2025-02-13 20:10:34.147568815 +0000 UTC m=+0.788678394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,}" Feb 13 20:10:34.179834 kubelet[2245]: I0213 20:10:34.179782 2245 factory.go:221] Registration of the systemd container 
factory successfully Feb 13 20:10:34.180013 kubelet[2245]: I0213 20:10:34.179958 2245 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:10:34.185114 kubelet[2245]: I0213 20:10:34.185040 2245 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:10:34.185263 kubelet[2245]: E0213 20:10:34.185220 2245 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:10:34.211144 kubelet[2245]: I0213 20:10:34.208758 2245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:10:34.211144 kubelet[2245]: I0213 20:10:34.211135 2245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:10:34.211372 kubelet[2245]: I0213 20:10:34.211167 2245 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:10:34.211372 kubelet[2245]: I0213 20:10:34.211194 2245 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:10:34.211372 kubelet[2245]: E0213 20:10:34.211291 2245 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:10:34.220272 kubelet[2245]: W0213 20:10:34.220194 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.220462 kubelet[2245]: E0213 20:10:34.220284 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:34.222042 kubelet[2245]: I0213 20:10:34.222012 2245 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:10:34.222628 kubelet[2245]: I0213 20:10:34.222267 2245 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:10:34.222628 kubelet[2245]: I0213 20:10:34.222303 2245 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:10:34.225476 kubelet[2245]: I0213 20:10:34.225452 2245 policy_none.go:49] "None policy: Start" Feb 13 20:10:34.226441 kubelet[2245]: I0213 20:10:34.226416 2245 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:10:34.226559 kubelet[2245]: I0213 20:10:34.226454 2245 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:10:34.239066 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:10:34.254957 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:10:34.260009 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 20:10:34.271575 kubelet[2245]: I0213 20:10:34.270799 2245 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:10:34.271744 kubelet[2245]: I0213 20:10:34.271542 2245 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:10:34.273979 kubelet[2245]: I0213 20:10:34.273799 2245 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:10:34.276173 kubelet[2245]: E0213 20:10:34.276069 2245 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" not found" Feb 13 20:10:34.277474 kubelet[2245]: I0213 20:10:34.277439 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.277979 kubelet[2245]: E0213 20:10:34.277906 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.310445 sshd[2238]: Connection closed by invalid user backups 194.0.234.38 port 24500 [preauth] Feb 13 20:10:34.311871 kubelet[2245]: I0213 20:10:34.311443 2245 topology_manager.go:215] "Topology Admit Handler" podUID="7f657a77706ff686754ca12f83833a48" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.314421 systemd[1]: sshd@7-10.128.0.67:22-194.0.234.38:24500.service: Deactivated successfully. Feb 13 20:10:34.318714 kubelet[2245]: I0213 20:10:34.318109 2245 topology_manager.go:215] "Topology Admit Handler" podUID="b63f1a47c74ab2557b8d7504f175d173" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.324247 kubelet[2245]: I0213 20:10:34.323795 2245 topology_manager.go:215] "Topology Admit Handler" podUID="3774d868f066a555fa6d89f102fcb0f1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.334585 systemd[1]: Created slice kubepods-burstable-pod7f657a77706ff686754ca12f83833a48.slice - libcontainer container kubepods-burstable-pod7f657a77706ff686754ca12f83833a48.slice. Feb 13 20:10:34.351601 systemd[1]: Created slice kubepods-burstable-podb63f1a47c74ab2557b8d7504f175d173.slice - libcontainer container kubepods-burstable-podb63f1a47c74ab2557b8d7504f175d173.slice. Feb 13 20:10:34.358743 systemd[1]: Created slice kubepods-burstable-pod3774d868f066a555fa6d89f102fcb0f1.slice - libcontainer container kubepods-burstable-pod3774d868f066a555fa6d89f102fcb0f1.slice. 
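The slice names just created spell out the kubelet's cgroup layout under the systemd driver: a kubepods.slice parent, one child slice per QoS class, and one slice per pod with the pod UID embedded. These UIDs are dashless manifest hashes because the pods are static; a UID assigned by the API server would have its dashes rewritten for systemd. A sketch of the convention (an approximation of the kubelet's cgroup-name-to-systemd conversion):

    def pod_slice(uid: str, qos: str = "burstable") -> str:
        """Approximate systemd slice name for a pod cgroup (hypothetical helper)."""
        uid = uid.replace("-", "_")  # systemd-safe form of API-server UIDs
        parent = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
        return f"{parent}-pod{uid}.slice"

    assert pod_slice("7f657a77706ff686754ca12f83833a48") \
        == "kubepods-burstable-pod7f657a77706ff686754ca12f83833a48.slice"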
Feb 13 20:10:34.375049 kubelet[2245]: E0213 20:10:34.374970 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="400ms" Feb 13 20:10:34.472448 kubelet[2245]: I0213 20:10:34.472379 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.473118 kubelet[2245]: I0213 20:10:34.472452 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.473118 kubelet[2245]: I0213 20:10:34.472498 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.473118 kubelet[2245]: I0213 20:10:34.472531 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.473118 kubelet[2245]: I0213 20:10:34.472572 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f657a77706ff686754ca12f83833a48-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"7f657a77706ff686754ca12f83833a48\") " pod="kube-system/kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.473299 kubelet[2245]: I0213 20:10:34.472603 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b63f1a47c74ab2557b8d7504f175d173-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"b63f1a47c74ab2557b8d7504f175d173\") " pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.473299 kubelet[2245]: I0213 20:10:34.472644 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b63f1a47c74ab2557b8d7504f175d173-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"b63f1a47c74ab2557b8d7504f175d173\") " pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.473299 kubelet[2245]: I0213 20:10:34.472681 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b63f1a47c74ab2557b8d7504f175d173-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"b63f1a47c74ab2557b8d7504f175d173\") " pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.473299 kubelet[2245]: I0213 20:10:34.472716 2245 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.482992 kubelet[2245]: I0213 20:10:34.482899 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.483396 kubelet[2245]: E0213 20:10:34.483355 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.647999 containerd[1476]: time="2025-02-13T20:10:34.647835739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,Uid:7f657a77706ff686754ca12f83833a48,Namespace:kube-system,Attempt:0,}" Feb 13 20:10:34.657230 containerd[1476]: time="2025-02-13T20:10:34.657174870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,Uid:b63f1a47c74ab2557b8d7504f175d173,Namespace:kube-system,Attempt:0,}" Feb 13 20:10:34.664428 containerd[1476]: time="2025-02-13T20:10:34.664025491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,Uid:3774d868f066a555fa6d89f102fcb0f1,Namespace:kube-system,Attempt:0,}" Feb 13 20:10:34.775677 kubelet[2245]: E0213 20:10:34.775598 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="800ms" Feb 13 20:10:34.888262 kubelet[2245]: I0213 20:10:34.888216 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:34.888777 kubelet[2245]: E0213 20:10:34.888687 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:35.019407 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410702413.mount: Deactivated successfully. Feb 13 20:10:35.029617 containerd[1476]: time="2025-02-13T20:10:35.029546009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:10:35.030925 containerd[1476]: time="2025-02-13T20:10:35.030865629Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:10:35.032375 containerd[1476]: time="2025-02-13T20:10:35.032306155Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:10:35.033401 containerd[1476]: time="2025-02-13T20:10:35.033336243Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 20:10:35.035048 containerd[1476]: time="2025-02-13T20:10:35.034979989Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:10:35.036657 containerd[1476]: time="2025-02-13T20:10:35.036602353Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:10:35.037417 containerd[1476]: time="2025-02-13T20:10:35.037336509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:10:35.040683 containerd[1476]: time="2025-02-13T20:10:35.040546099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:10:35.044338 containerd[1476]: time="2025-02-13T20:10:35.043617365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 395.669046ms" Feb 13 20:10:35.046130 containerd[1476]: time="2025-02-13T20:10:35.045790434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 388.508091ms" Feb 13 20:10:35.050677 containerd[1476]: time="2025-02-13T20:10:35.050612964Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 386.478602ms" Feb 13 20:10:35.155251 kubelet[2245]: W0213 20:10:35.155078 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:35.155251 kubelet[2245]: E0213 20:10:35.155217 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:35.178747 kubelet[2245]: W0213 20:10:35.178685 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:35.178747 kubelet[2245]: E0213 20:10:35.178760 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:35.243141 kubelet[2245]: W0213 20:10:35.242526 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:35.243141 kubelet[2245]: E0213 20:10:35.242632 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:35.272822 containerd[1476]: time="2025-02-13T20:10:35.270399835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:35.274911 containerd[1476]: time="2025-02-13T20:10:35.273739280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:35.275533 containerd[1476]: time="2025-02-13T20:10:35.275372818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:35.279599 containerd[1476]: time="2025-02-13T20:10:35.277344605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:35.279599 containerd[1476]: time="2025-02-13T20:10:35.277497206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:35.280415 containerd[1476]: time="2025-02-13T20:10:35.280027295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:35.280822 containerd[1476]: time="2025-02-13T20:10:35.280765314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:35.285044 containerd[1476]: time="2025-02-13T20:10:35.284635791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:35.285044 containerd[1476]: time="2025-02-13T20:10:35.284716777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:35.285044 containerd[1476]: time="2025-02-13T20:10:35.284743883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:35.285044 containerd[1476]: time="2025-02-13T20:10:35.284860752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:35.287266 containerd[1476]: time="2025-02-13T20:10:35.283710089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:35.338436 systemd[1]: Started cri-containerd-511b47fe0e30dcb69c76cc743b484ece7ac65d1e6538a57932e2877f97e6a3e5.scope - libcontainer container 511b47fe0e30dcb69c76cc743b484ece7ac65d1e6538a57932e2877f97e6a3e5. Feb 13 20:10:35.340720 systemd[1]: Started cri-containerd-812ceee0b6e82eb900fd24d3e103704a89eaeced28005b6f687cb83eff5b93e6.scope - libcontainer container 812ceee0b6e82eb900fd24d3e103704a89eaeced28005b6f687cb83eff5b93e6. Feb 13 20:10:35.345630 kubelet[2245]: W0213 20:10:35.342555 2245 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:35.343972 systemd[1]: Started cri-containerd-a73f6c877fa389736208b58cdc90c3adb124f4aaab68cd74cf7eef79195a1ef8.scope - libcontainer container a73f6c877fa389736208b58cdc90c3adb124f4aaab68cd74cf7eef79195a1ef8. 
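Worth noticing across these entries: the "Failed to ensure lease exists, will retry" interval doubles on each attempt, 200ms, 400ms, and 800ms above, then 1.6s and 3.2s below, which is the lease controller's client-side exponential backoff. A one-line check of the reported values:

    intervals = [0.2, 0.4, 0.8, 1.6, 3.2]  # seconds, as reported in this log
    assert all(later == 2 * earlier
               for earlier, later in zip(intervals, intervals[1:]))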
Feb 13 20:10:35.347563 kubelet[2245]: E0213 20:10:35.346202 2245 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Feb 13 20:10:35.445081 containerd[1476]: time="2025-02-13T20:10:35.445008024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,Uid:7f657a77706ff686754ca12f83833a48,Namespace:kube-system,Attempt:0,} returns sandbox id \"812ceee0b6e82eb900fd24d3e103704a89eaeced28005b6f687cb83eff5b93e6\"" Feb 13 20:10:35.449376 kubelet[2245]: E0213 20:10:35.448853 2245 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-21291" Feb 13 20:10:35.456129 containerd[1476]: time="2025-02-13T20:10:35.454571745Z" level=info msg="CreateContainer within sandbox \"812ceee0b6e82eb900fd24d3e103704a89eaeced28005b6f687cb83eff5b93e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:10:35.479497 containerd[1476]: time="2025-02-13T20:10:35.479433052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,Uid:3774d868f066a555fa6d89f102fcb0f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a73f6c877fa389736208b58cdc90c3adb124f4aaab68cd74cf7eef79195a1ef8\"" Feb 13 20:10:35.482882 kubelet[2245]: E0213 20:10:35.482825 2245 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flat" Feb 13 20:10:35.485375 containerd[1476]: time="2025-02-13T20:10:35.485321936Z" level=info msg="CreateContainer within sandbox \"a73f6c877fa389736208b58cdc90c3adb124f4aaab68cd74cf7eef79195a1ef8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:10:35.492659 containerd[1476]: time="2025-02-13T20:10:35.492546607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal,Uid:b63f1a47c74ab2557b8d7504f175d173,Namespace:kube-system,Attempt:0,} returns sandbox id \"511b47fe0e30dcb69c76cc743b484ece7ac65d1e6538a57932e2877f97e6a3e5\"" Feb 13 20:10:35.494962 kubelet[2245]: E0213 20:10:35.494900 2245 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-21291" Feb 13 20:10:35.496556 containerd[1476]: time="2025-02-13T20:10:35.496515537Z" level=info msg="CreateContainer within sandbox \"812ceee0b6e82eb900fd24d3e103704a89eaeced28005b6f687cb83eff5b93e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ec48b15db882354f26bcbe31c5d97d3dd18057e3dbf2d26adf560806067881a\"" Feb 13 20:10:35.497842 containerd[1476]: time="2025-02-13T20:10:35.497802518Z" level=info msg="StartContainer for \"6ec48b15db882354f26bcbe31c5d97d3dd18057e3dbf2d26adf560806067881a\"" Feb 13 20:10:35.498597 containerd[1476]: time="2025-02-13T20:10:35.498551652Z" level=info 
msg="CreateContainer within sandbox \"511b47fe0e30dcb69c76cc743b484ece7ac65d1e6538a57932e2877f97e6a3e5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:10:35.524183 containerd[1476]: time="2025-02-13T20:10:35.523327133Z" level=info msg="CreateContainer within sandbox \"a73f6c877fa389736208b58cdc90c3adb124f4aaab68cd74cf7eef79195a1ef8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e15afc5bbb61666c377515ae60615b9861c10ec3c81514d0aae6abaf59c8af3c\"" Feb 13 20:10:35.526126 containerd[1476]: time="2025-02-13T20:10:35.524782943Z" level=info msg="StartContainer for \"e15afc5bbb61666c377515ae60615b9861c10ec3c81514d0aae6abaf59c8af3c\"" Feb 13 20:10:35.539407 containerd[1476]: time="2025-02-13T20:10:35.539343724Z" level=info msg="CreateContainer within sandbox \"511b47fe0e30dcb69c76cc743b484ece7ac65d1e6538a57932e2877f97e6a3e5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"76230f07b0d9db0ce2d4e336ee7ddb09c8c3ccc5f6f01616cca912a33485ed1d\"" Feb 13 20:10:35.544146 containerd[1476]: time="2025-02-13T20:10:35.542659803Z" level=info msg="StartContainer for \"76230f07b0d9db0ce2d4e336ee7ddb09c8c3ccc5f6f01616cca912a33485ed1d\"" Feb 13 20:10:35.557631 systemd[1]: Started cri-containerd-6ec48b15db882354f26bcbe31c5d97d3dd18057e3dbf2d26adf560806067881a.scope - libcontainer container 6ec48b15db882354f26bcbe31c5d97d3dd18057e3dbf2d26adf560806067881a. Feb 13 20:10:35.579127 kubelet[2245]: E0213 20:10:35.577175 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="1.6s" Feb 13 20:10:35.626385 systemd[1]: Started cri-containerd-76230f07b0d9db0ce2d4e336ee7ddb09c8c3ccc5f6f01616cca912a33485ed1d.scope - libcontainer container 76230f07b0d9db0ce2d4e336ee7ddb09c8c3ccc5f6f01616cca912a33485ed1d. Feb 13 20:10:35.629542 systemd[1]: Started cri-containerd-e15afc5bbb61666c377515ae60615b9861c10ec3c81514d0aae6abaf59c8af3c.scope - libcontainer container e15afc5bbb61666c377515ae60615b9861c10ec3c81514d0aae6abaf59c8af3c. Feb 13 20:10:35.692218 containerd[1476]: time="2025-02-13T20:10:35.689801311Z" level=info msg="StartContainer for \"6ec48b15db882354f26bcbe31c5d97d3dd18057e3dbf2d26adf560806067881a\" returns successfully" Feb 13 20:10:35.697142 kubelet[2245]: I0213 20:10:35.697072 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:35.698228 kubelet[2245]: E0213 20:10:35.697599 2245 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:35.778575 containerd[1476]: time="2025-02-13T20:10:35.778250793Z" level=info msg="StartContainer for \"e15afc5bbb61666c377515ae60615b9861c10ec3c81514d0aae6abaf59c8af3c\" returns successfully" Feb 13 20:10:35.778575 containerd[1476]: time="2025-02-13T20:10:35.778386043Z" level=info msg="StartContainer for \"76230f07b0d9db0ce2d4e336ee7ddb09c8c3ccc5f6f01616cca912a33485ed1d\" returns successfully" Feb 13 20:10:37.112172 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 13 20:10:37.303662 kubelet[2245]: I0213 20:10:37.303470 2245 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:38.468079 kubelet[2245]: I0213 20:10:38.468025 2245 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:38.567702 kubelet[2245]: E0213 20:10:38.567646 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 20:10:39.133222 kubelet[2245]: I0213 20:10:39.133174 2245 apiserver.go:52] "Watching apiserver" Feb 13 20:10:39.171957 kubelet[2245]: I0213 20:10:39.171915 2245 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:10:39.626519 kubelet[2245]: W0213 20:10:39.626466 2245 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:40.313173 systemd[1]: Reloading requested from client PID 2522 ('systemctl') (unit session-7.scope)... Feb 13 20:10:40.313201 systemd[1]: Reloading... Feb 13 20:10:40.376783 kubelet[2245]: W0213 20:10:40.376619 2245 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:40.464230 zram_generator::config[2562]: No configuration found. Feb 13 20:10:40.640620 kubelet[2245]: W0213 20:10:40.640458 2245 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:40.660968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:10:40.843660 systemd[1]: Reloading finished in 529 ms. Feb 13 20:10:40.908417 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:10:40.924025 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:10:40.924449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:10:40.924531 systemd[1]: kubelet.service: Consumed 1.361s CPU time, 115.6M memory peak, 0B memory swap peak. Feb 13 20:10:40.932786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:10:41.172400 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:10:41.185807 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:10:41.272062 kubelet[2611]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:10:41.272062 kubelet[2611]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:10:41.272062 kubelet[2611]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:10:41.272653 kubelet[2611]: I0213 20:10:41.272128 2611 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:10:41.278416 kubelet[2611]: I0213 20:10:41.278367 2611 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 20:10:41.278416 kubelet[2611]: I0213 20:10:41.278399 2611 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:10:41.279361 kubelet[2611]: I0213 20:10:41.278894 2611 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 20:10:41.287137 kubelet[2611]: I0213 20:10:41.284890 2611 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 20:10:41.288984 kubelet[2611]: I0213 20:10:41.288948 2611 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:10:41.305136 kubelet[2611]: I0213 20:10:41.304938 2611 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:10:41.306313 kubelet[2611]: I0213 20:10:41.306058 2611 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:10:41.307951 kubelet[2611]: I0213 20:10:41.306170 2611 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 20:10:41.307951 kubelet[2611]: I0213 20:10:41.307427 2611 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:10:41.307951 kubelet[2611]: I0213 20:10:41.307449 2611 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 20:10:41.307951 kubelet[2611]: I0213 20:10:41.307523 2611 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:10:41.308767 kubelet[2611]: I0213 20:10:41.308492 2611 kubelet.go:400] 
"Attempting to sync node with API server" Feb 13 20:10:41.308767 kubelet[2611]: I0213 20:10:41.308611 2611 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:10:41.308767 kubelet[2611]: I0213 20:10:41.308650 2611 kubelet.go:312] "Adding apiserver pod source" Feb 13 20:10:41.308767 kubelet[2611]: I0213 20:10:41.308677 2611 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:10:41.310334 kubelet[2611]: I0213 20:10:41.310001 2611 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:10:41.311286 kubelet[2611]: I0213 20:10:41.310812 2611 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:10:41.314111 kubelet[2611]: I0213 20:10:41.312066 2611 server.go:1264] "Started kubelet" Feb 13 20:10:41.319738 kubelet[2611]: I0213 20:10:41.319594 2611 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:10:41.334627 kubelet[2611]: I0213 20:10:41.334573 2611 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:10:41.340790 kubelet[2611]: I0213 20:10:41.339469 2611 server.go:455] "Adding debug handlers to kubelet server" Feb 13 20:10:41.345307 kubelet[2611]: I0213 20:10:41.345030 2611 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:10:41.349479 kubelet[2611]: I0213 20:10:41.348769 2611 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:10:41.349479 kubelet[2611]: I0213 20:10:41.348891 2611 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 20:10:41.355083 kubelet[2611]: I0213 20:10:41.355038 2611 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 20:10:41.365014 kubelet[2611]: I0213 20:10:41.364970 2611 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:10:41.370023 kubelet[2611]: I0213 20:10:41.369943 2611 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:10:41.371046 kubelet[2611]: I0213 20:10:41.371005 2611 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:10:41.387136 kubelet[2611]: I0213 20:10:41.386192 2611 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:10:41.388168 kubelet[2611]: E0213 20:10:41.388134 2611 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:10:41.402833 kubelet[2611]: I0213 20:10:41.402783 2611 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:10:41.406334 kubelet[2611]: I0213 20:10:41.406283 2611 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:10:41.406584 kubelet[2611]: I0213 20:10:41.406563 2611 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:10:41.406769 kubelet[2611]: I0213 20:10:41.406740 2611 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 20:10:41.406966 kubelet[2611]: E0213 20:10:41.406939 2611 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:10:41.466576 kubelet[2611]: I0213 20:10:41.466451 2611 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.489330 kubelet[2611]: I0213 20:10:41.489016 2611 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.489330 kubelet[2611]: I0213 20:10:41.489206 2611 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.507773 kubelet[2611]: E0213 20:10:41.507723 2611 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:10:41.513857 kubelet[2611]: I0213 20:10:41.513678 2611 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:10:41.514234 kubelet[2611]: I0213 20:10:41.513774 2611 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:10:41.514234 kubelet[2611]: I0213 20:10:41.514154 2611 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:10:41.514943 kubelet[2611]: I0213 20:10:41.514917 2611 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:10:41.515297 kubelet[2611]: I0213 20:10:41.515076 2611 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:10:41.515297 kubelet[2611]: I0213 20:10:41.515197 2611 policy_none.go:49] "None policy: Start" Feb 13 20:10:41.516987 kubelet[2611]: I0213 20:10:41.516840 2611 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:10:41.516987 kubelet[2611]: I0213 20:10:41.516872 2611 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:10:41.517720 kubelet[2611]: I0213 20:10:41.517592 2611 state_mem.go:75] "Updated machine memory state" Feb 13 20:10:41.529163 kubelet[2611]: I0213 20:10:41.528740 2611 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:10:41.529163 kubelet[2611]: I0213 20:10:41.529026 2611 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:10:41.530013 kubelet[2611]: I0213 20:10:41.529684 2611 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:10:41.708639 kubelet[2611]: I0213 20:10:41.708571 2611 topology_manager.go:215] "Topology Admit Handler" podUID="7f657a77706ff686754ca12f83833a48" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.709122 kubelet[2611]: I0213 20:10:41.708753 2611 topology_manager.go:215] "Topology Admit Handler" podUID="b63f1a47c74ab2557b8d7504f175d173" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.709473 kubelet[2611]: I0213 20:10:41.709219 2611 topology_manager.go:215] "Topology Admit Handler" podUID="3774d868f066a555fa6d89f102fcb0f1" podNamespace="kube-system" 
podName="kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.725600 kubelet[2611]: W0213 20:10:41.725464 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:41.725600 kubelet[2611]: E0213 20:10:41.725557 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.728421 kubelet[2611]: W0213 20:10:41.728183 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:41.728421 kubelet[2611]: E0213 20:10:41.728273 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.728659 kubelet[2611]: W0213 20:10:41.728571 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:41.728659 kubelet[2611]: E0213 20:10:41.728627 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.767834 kubelet[2611]: I0213 20:10:41.767777 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7f657a77706ff686754ca12f83833a48-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"7f657a77706ff686754ca12f83833a48\") " pod="kube-system/kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.768080 kubelet[2611]: I0213 20:10:41.767860 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.768080 kubelet[2611]: I0213 20:10:41.767893 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.768080 kubelet[2611]: I0213 20:10:41.767927 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.768080 kubelet[2611]: I0213 20:10:41.767961 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.768746 kubelet[2611]: I0213 20:10:41.767995 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b63f1a47c74ab2557b8d7504f175d173-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"b63f1a47c74ab2557b8d7504f175d173\") " pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.768746 kubelet[2611]: I0213 20:10:41.768024 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b63f1a47c74ab2557b8d7504f175d173-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"b63f1a47c74ab2557b8d7504f175d173\") " pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.768746 kubelet[2611]: I0213 20:10:41.768052 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b63f1a47c74ab2557b8d7504f175d173-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"b63f1a47c74ab2557b8d7504f175d173\") " pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:41.768746 kubelet[2611]: I0213 20:10:41.768078 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3774d868f066a555fa6d89f102fcb0f1-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" (UID: \"3774d868f066a555fa6d89f102fcb0f1\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:42.322481 kubelet[2611]: I0213 20:10:42.322429 2611 apiserver.go:52] "Watching apiserver" Feb 13 20:10:42.356069 kubelet[2611]: I0213 20:10:42.356016 2611 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 20:10:42.488918 kubelet[2611]: W0213 20:10:42.488862 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:42.489131 kubelet[2611]: E0213 20:10:42.488962 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" already exists" 
pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:42.496215 kubelet[2611]: W0213 20:10:42.496171 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:42.496397 kubelet[2611]: E0213 20:10:42.496267 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:42.540648 kubelet[2611]: W0213 20:10:42.540588 2611 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 20:10:42.540840 kubelet[2611]: E0213 20:10:42.540690 2611 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:10:42.631432 kubelet[2611]: I0213 20:10:42.631220 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" podStartSLOduration=3.631160585 podStartE2EDuration="3.631160585s" podCreationTimestamp="2025-02-13 20:10:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:42.619723837 +0000 UTC m=+1.425407480" watchObservedRunningTime="2025-02-13 20:10:42.631160585 +0000 UTC m=+1.436844229" Feb 13 20:10:42.632380 kubelet[2611]: I0213 20:10:42.631846 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" podStartSLOduration=2.6315000360000003 podStartE2EDuration="2.631500036s" podCreationTimestamp="2025-02-13 20:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:42.582377862 +0000 UTC m=+1.388061507" watchObservedRunningTime="2025-02-13 20:10:42.631500036 +0000 UTC m=+1.437183706" Feb 13 20:10:47.196231 sudo[1714]: pam_unix(sudo:session): session closed for user root Feb 13 20:10:47.230827 kubelet[2611]: I0213 20:10:47.230747 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" podStartSLOduration=7.230722725 podStartE2EDuration="7.230722725s" podCreationTimestamp="2025-02-13 20:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:42.663179776 +0000 UTC m=+1.468863418" watchObservedRunningTime="2025-02-13 20:10:47.230722725 +0000 UTC m=+6.036406371" Feb 13 20:10:47.240036 sshd[1711]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:47.246106 systemd[1]: sshd@6-10.128.0.67:22-139.178.89.65:33586.service: Deactivated successfully. Feb 13 20:10:47.249837 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 20:10:47.250423 systemd[1]: session-7.scope: Consumed 6.919s CPU time, 191.2M memory peak, 0B memory swap peak. Feb 13 20:10:47.253728 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:10:47.255978 systemd-logind[1446]: Removed session 7. Feb 13 20:10:51.725190 update_engine[1447]: I20250213 20:10:51.725060 1447 update_attempter.cc:509] Updating boot flags... Feb 13 20:10:51.818438 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2700) Feb 13 20:10:51.935625 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2703) Feb 13 20:10:52.085135 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2703) Feb 13 20:10:54.043241 kubelet[2611]: I0213 20:10:54.042703 2611 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:10:54.046761 containerd[1476]: time="2025-02-13T20:10:54.046045102Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 20:10:54.047365 kubelet[2611]: I0213 20:10:54.046455 2611 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:10:54.888216 kubelet[2611]: I0213 20:10:54.886451 2611 topology_manager.go:215] "Topology Admit Handler" podUID="45282b3b-4d0c-4c2d-a541-20aaa508faff" podNamespace="kube-system" podName="kube-proxy-9hbb8" Feb 13 20:10:54.904042 systemd[1]: Created slice kubepods-besteffort-pod45282b3b_4d0c_4c2d_a541_20aaa508faff.slice - libcontainer container kubepods-besteffort-pod45282b3b_4d0c_4c2d_a541_20aaa508faff.slice. Feb 13 20:10:54.954431 kubelet[2611]: I0213 20:10:54.954377 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/45282b3b-4d0c-4c2d-a541-20aaa508faff-kube-proxy\") pod \"kube-proxy-9hbb8\" (UID: \"45282b3b-4d0c-4c2d-a541-20aaa508faff\") " pod="kube-system/kube-proxy-9hbb8" Feb 13 20:10:54.954431 kubelet[2611]: I0213 20:10:54.954439 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45282b3b-4d0c-4c2d-a541-20aaa508faff-lib-modules\") pod \"kube-proxy-9hbb8\" (UID: \"45282b3b-4d0c-4c2d-a541-20aaa508faff\") " pod="kube-system/kube-proxy-9hbb8" Feb 13 20:10:54.954431 kubelet[2611]: I0213 20:10:54.954481 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blk54\" (UniqueName: \"kubernetes.io/projected/45282b3b-4d0c-4c2d-a541-20aaa508faff-kube-api-access-blk54\") pod \"kube-proxy-9hbb8\" (UID: \"45282b3b-4d0c-4c2d-a541-20aaa508faff\") " pod="kube-system/kube-proxy-9hbb8" Feb 13 20:10:54.954431 kubelet[2611]: I0213 20:10:54.954520 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45282b3b-4d0c-4c2d-a541-20aaa508faff-xtables-lock\") pod \"kube-proxy-9hbb8\" (UID: \"45282b3b-4d0c-4c2d-a541-20aaa508faff\") " pod="kube-system/kube-proxy-9hbb8" Feb 13 20:10:55.135130 kubelet[2611]: I0213 20:10:55.131941 2611 topology_manager.go:215] "Topology Admit Handler" podUID="160ad1f0-087e-4a3b-b594-60e60dd4d6bc" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-858wz" Feb 13 20:10:55.150160 systemd[1]: Created slice 
kubepods-besteffort-pod160ad1f0_087e_4a3b_b594_60e60dd4d6bc.slice - libcontainer container kubepods-besteffort-pod160ad1f0_087e_4a3b_b594_60e60dd4d6bc.slice. Feb 13 20:10:55.155573 kubelet[2611]: I0213 20:10:55.155537 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m74d5\" (UniqueName: \"kubernetes.io/projected/160ad1f0-087e-4a3b-b594-60e60dd4d6bc-kube-api-access-m74d5\") pod \"tigera-operator-7bc55997bb-858wz\" (UID: \"160ad1f0-087e-4a3b-b594-60e60dd4d6bc\") " pod="tigera-operator/tigera-operator-7bc55997bb-858wz" Feb 13 20:10:55.155748 kubelet[2611]: I0213 20:10:55.155587 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/160ad1f0-087e-4a3b-b594-60e60dd4d6bc-var-lib-calico\") pod \"tigera-operator-7bc55997bb-858wz\" (UID: \"160ad1f0-087e-4a3b-b594-60e60dd4d6bc\") " pod="tigera-operator/tigera-operator-7bc55997bb-858wz" Feb 13 20:10:55.214728 containerd[1476]: time="2025-02-13T20:10:55.214676659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9hbb8,Uid:45282b3b-4d0c-4c2d-a541-20aaa508faff,Namespace:kube-system,Attempt:0,}" Feb 13 20:10:55.259761 containerd[1476]: time="2025-02-13T20:10:55.259615857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:55.260541 containerd[1476]: time="2025-02-13T20:10:55.259724661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:55.260541 containerd[1476]: time="2025-02-13T20:10:55.259757699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:55.260541 containerd[1476]: time="2025-02-13T20:10:55.259909707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:55.314405 systemd[1]: Started cri-containerd-c5d2369ff5f0fd73c90a9b2b1dfcb01b431240c79770e2ca13fd0c263dca3bde.scope - libcontainer container c5d2369ff5f0fd73c90a9b2b1dfcb01b431240c79770e2ca13fd0c263dca3bde. 
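`RunPodSandbox` is the kubelet asking the runtime to set up the pod's sandbox (network namespace plus pause container) before any workload container exists. Roughly the same request in CRI terms, reusing the client wiring from the earlier sketch (a sketch, not kubelet source; the metadata values are copied from the kube-proxy entry above):

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runSandbox issues the CRI call behind "RunPodSandbox for
// &PodSandboxMetadata{...}"; rt is a RuntimeServiceClient built as in the
// earlier sketch.
func runSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient) (string, error) {
	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-9hbb8",
				Namespace: "kube-system",
				Uid:       "45282b3b-4d0c-4c2d-a541-20aaa508faff",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		return "", err
	}
	// The returned id (c5d2369f... here) is what the subsequent
	// CreateContainer request names as its sandbox.
	return resp.PodSandboxId, nil
}
```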
Feb 13 20:10:55.352531 containerd[1476]: time="2025-02-13T20:10:55.352257837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9hbb8,Uid:45282b3b-4d0c-4c2d-a541-20aaa508faff,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5d2369ff5f0fd73c90a9b2b1dfcb01b431240c79770e2ca13fd0c263dca3bde\"" Feb 13 20:10:55.357834 containerd[1476]: time="2025-02-13T20:10:55.357737493Z" level=info msg="CreateContainer within sandbox \"c5d2369ff5f0fd73c90a9b2b1dfcb01b431240c79770e2ca13fd0c263dca3bde\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:10:55.378461 containerd[1476]: time="2025-02-13T20:10:55.378374308Z" level=info msg="CreateContainer within sandbox \"c5d2369ff5f0fd73c90a9b2b1dfcb01b431240c79770e2ca13fd0c263dca3bde\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"af16a0560a3880a1e15cb472567890ad13444d52525e3a27b2c6983e0469bd2b\"" Feb 13 20:10:55.379533 containerd[1476]: time="2025-02-13T20:10:55.379491385Z" level=info msg="StartContainer for \"af16a0560a3880a1e15cb472567890ad13444d52525e3a27b2c6983e0469bd2b\"" Feb 13 20:10:55.424367 systemd[1]: Started cri-containerd-af16a0560a3880a1e15cb472567890ad13444d52525e3a27b2c6983e0469bd2b.scope - libcontainer container af16a0560a3880a1e15cb472567890ad13444d52525e3a27b2c6983e0469bd2b. Feb 13 20:10:55.457456 containerd[1476]: time="2025-02-13T20:10:55.456926540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-858wz,Uid:160ad1f0-087e-4a3b-b594-60e60dd4d6bc,Namespace:tigera-operator,Attempt:0,}" Feb 13 20:10:55.473223 containerd[1476]: time="2025-02-13T20:10:55.472444512Z" level=info msg="StartContainer for \"af16a0560a3880a1e15cb472567890ad13444d52525e3a27b2c6983e0469bd2b\" returns successfully" Feb 13 20:10:55.514782 containerd[1476]: time="2025-02-13T20:10:55.514371275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:55.516379 containerd[1476]: time="2025-02-13T20:10:55.514635053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:55.516379 containerd[1476]: time="2025-02-13T20:10:55.516171920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:55.518117 containerd[1476]: time="2025-02-13T20:10:55.517196996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:55.553444 systemd[1]: Started cri-containerd-bf8acf4706bbb6165999e5de35799039614ce938465a4829ca5ec2b3709275fe.scope - libcontainer container bf8acf4706bbb6165999e5de35799039614ce938465a4829ca5ec2b3709275fe. Feb 13 20:10:55.644669 containerd[1476]: time="2025-02-13T20:10:55.644495231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-858wz,Uid:160ad1f0-087e-4a3b-b594-60e60dd4d6bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bf8acf4706bbb6165999e5de35799039614ce938465a4829ca5ec2b3709275fe\"" Feb 13 20:10:55.649720 containerd[1476]: time="2025-02-13T20:10:55.649514355Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 20:10:57.069634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873212910.mount: Deactivated successfully. 
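The `PullImage` entry that closes this line kicks off a pull through CRI's image service; the result arrives about two seconds later. A comparable call, sketched against the same connection (how the `ImageServiceClient` is wired up is an assumption, not shown in the log):

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// pullOperator mirrors the PullImage request logged above. img comes from
// runtimeapi.NewImageServiceClient over the same containerd connection.
func pullOperator(ctx context.Context, img runtimeapi.ImageServiceClient) (string, error) {
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		return "", err
	}
	// ImageRef is the digest-pinned reference reported back in the
	// "returns image reference" entry.
	return resp.ImageRef, nil
}
```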
Feb 13 20:10:57.844495 containerd[1476]: time="2025-02-13T20:10:57.844419581Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:57.846071 containerd[1476]: time="2025-02-13T20:10:57.845990556Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 20:10:57.847509 containerd[1476]: time="2025-02-13T20:10:57.847470343Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:57.853030 containerd[1476]: time="2025-02-13T20:10:57.852944483Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:10:57.855082 containerd[1476]: time="2025-02-13T20:10:57.854155594Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.204567734s" Feb 13 20:10:57.855082 containerd[1476]: time="2025-02-13T20:10:57.854206146Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 20:10:57.857712 containerd[1476]: time="2025-02-13T20:10:57.857457876Z" level=info msg="CreateContainer within sandbox \"bf8acf4706bbb6165999e5de35799039614ce938465a4829ca5ec2b3709275fe\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 20:10:57.875990 containerd[1476]: time="2025-02-13T20:10:57.875921781Z" level=info msg="CreateContainer within sandbox \"bf8acf4706bbb6165999e5de35799039614ce938465a4829ca5ec2b3709275fe\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"da0c54eccb53a0ce28345e540edfbce1357936dd8735d008e20e37c6dda78d73\"" Feb 13 20:10:57.878468 containerd[1476]: time="2025-02-13T20:10:57.877082671Z" level=info msg="StartContainer for \"da0c54eccb53a0ce28345e540edfbce1357936dd8735d008e20e37c6dda78d73\"" Feb 13 20:10:57.929417 systemd[1]: Started cri-containerd-da0c54eccb53a0ce28345e540edfbce1357936dd8735d008e20e37c6dda78d73.scope - libcontainer container da0c54eccb53a0ce28345e540edfbce1357936dd8735d008e20e37c6dda78d73. 
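For scale: the pull reported 21,758,492 bytes in 2.204567734s, roughly 9.9 MB/s from quay.io. Checking the log's own numbers:

```go
package main

import "fmt"

func main() {
	const bytes = 21758492      // "size" reported for the operator image
	const seconds = 2.204567734 // pull duration from the same entry
	fmt.Printf("%.1f MB/s\n", bytes/seconds/1e6) // prints ≈ 9.9 MB/s
}
```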
Feb 13 20:10:57.972272 containerd[1476]: time="2025-02-13T20:10:57.972211642Z" level=info msg="StartContainer for \"da0c54eccb53a0ce28345e540edfbce1357936dd8735d008e20e37c6dda78d73\" returns successfully" Feb 13 20:10:58.512023 kubelet[2611]: I0213 20:10:58.511652 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9hbb8" podStartSLOduration=4.511630587 podStartE2EDuration="4.511630587s" podCreationTimestamp="2025-02-13 20:10:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:55.516992276 +0000 UTC m=+14.322675920" watchObservedRunningTime="2025-02-13 20:10:58.511630587 +0000 UTC m=+17.317314231" Feb 13 20:11:01.488451 kubelet[2611]: I0213 20:11:01.488302 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-858wz" podStartSLOduration=4.281495881 podStartE2EDuration="6.488248044s" podCreationTimestamp="2025-02-13 20:10:55 +0000 UTC" firstStartedPulling="2025-02-13 20:10:55.648507284 +0000 UTC m=+14.454190919" lastFinishedPulling="2025-02-13 20:10:57.855259451 +0000 UTC m=+16.660943082" observedRunningTime="2025-02-13 20:10:58.511941214 +0000 UTC m=+17.317624860" watchObservedRunningTime="2025-02-13 20:11:01.488248044 +0000 UTC m=+20.293931694" Feb 13 20:11:01.783229 kubelet[2611]: I0213 20:11:01.782209 2611 topology_manager.go:215] "Topology Admit Handler" podUID="6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e" podNamespace="calico-system" podName="calico-typha-7795f9f674-fp5bd" Feb 13 20:11:01.799934 systemd[1]: Created slice kubepods-besteffort-pod6d4bc3f3_dc41_4720_bd5b_dfa158e88e7e.slice - libcontainer container kubepods-besteffort-pod6d4bc3f3_dc41_4720_bd5b_dfa158e88e7e.slice. 
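The two durations in these tracker entries appear to be related: `podStartSLOduration` equals the end-to-end duration minus the image-pull window, which is why pods that pulled nothing (the zeroed pulling timestamps earlier) report identical values for both. Verifying against the tigera-operator figures, under that reading:

```go
package main

import "fmt"

func main() {
	// Figures from the tigera-operator entry (seconds within 20:10/20:11).
	e2e := 6.488248044                  // observedRunningTime - podCreationTimestamp
	pull := 57.855259451 - 55.648507284 // lastFinishedPulling - firstStartedPulling
	fmt.Printf("%.6f s\n", e2e-pull)    // ≈ 4.281496, i.e. podStartSLOduration
}
```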
Feb 13 20:11:01.801974 kubelet[2611]: W0213 20:11:01.801343 2611 reflector.go:547] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' and this object Feb 13 20:11:01.801974 kubelet[2611]: E0213 20:11:01.801410 2611 reflector.go:150] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' and this object Feb 13 20:11:01.801974 kubelet[2611]: W0213 20:11:01.801474 2611 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' and this object Feb 13 20:11:01.801974 kubelet[2611]: E0213 20:11:01.801491 2611 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' and this object Feb 13 20:11:01.802316 kubelet[2611]: W0213 20:11:01.801545 2611 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' and this object Feb 13 20:11:01.802316 kubelet[2611]: E0213 20:11:01.801560 2611 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' and this object Feb 13 20:11:01.902342 kubelet[2611]: I0213 20:11:01.902286 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-typha-certs\") pod \"calico-typha-7795f9f674-fp5bd\" (UID: \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\") " pod="calico-system/calico-typha-7795f9f674-fp5bd" Feb 13 20:11:01.902550 kubelet[2611]: I0213 20:11:01.902356 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znwr4\" (UniqueName: 
\"kubernetes.io/projected/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-kube-api-access-znwr4\") pod \"calico-typha-7795f9f674-fp5bd\" (UID: \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\") " pod="calico-system/calico-typha-7795f9f674-fp5bd" Feb 13 20:11:01.902550 kubelet[2611]: I0213 20:11:01.902392 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-tigera-ca-bundle\") pod \"calico-typha-7795f9f674-fp5bd\" (UID: \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\") " pod="calico-system/calico-typha-7795f9f674-fp5bd" Feb 13 20:11:01.934704 kubelet[2611]: I0213 20:11:01.934630 2611 topology_manager.go:215] "Topology Admit Handler" podUID="7e03ae09-2522-4225-9b0e-f0ca6b4697b9" podNamespace="calico-system" podName="calico-node-g4khm" Feb 13 20:11:01.952401 systemd[1]: Created slice kubepods-besteffort-pod7e03ae09_2522_4225_9b0e_f0ca6b4697b9.slice - libcontainer container kubepods-besteffort-pod7e03ae09_2522_4225_9b0e_f0ca6b4697b9.slice. Feb 13 20:11:02.002652 kubelet[2611]: I0213 20:11:02.002592 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-var-lib-calico\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.002652 kubelet[2611]: I0213 20:11:02.002653 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-bin-dir\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.003361 kubelet[2611]: I0213 20:11:02.002689 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp94q\" (UniqueName: \"kubernetes.io/projected/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-kube-api-access-tp94q\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.003361 kubelet[2611]: I0213 20:11:02.002741 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-node-certs\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.003361 kubelet[2611]: I0213 20:11:02.002776 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-flexvol-driver-host\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.003361 kubelet[2611]: I0213 20:11:02.002833 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-xtables-lock\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.003361 kubelet[2611]: I0213 20:11:02.002863 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-tigera-ca-bundle\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.005634 kubelet[2611]: I0213 20:11:02.002896 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-net-dir\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.005634 kubelet[2611]: I0213 20:11:02.002984 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-policysync\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.005634 kubelet[2611]: I0213 20:11:02.003015 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-var-run-calico\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.005634 kubelet[2611]: I0213 20:11:02.003044 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-log-dir\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.005634 kubelet[2611]: I0213 20:11:02.003072 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-lib-modules\") pod \"calico-node-g4khm\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " pod="calico-system/calico-node-g4khm" Feb 13 20:11:02.062770 kubelet[2611]: I0213 20:11:02.061077 2611 topology_manager.go:215] "Topology Admit Handler" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" podNamespace="calico-system" podName="csi-node-driver-bnc68" Feb 13 20:11:02.062770 kubelet[2611]: E0213 20:11:02.061534 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnc68" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" Feb 13 20:11:02.103560 kubelet[2611]: I0213 20:11:02.103490 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00ae0c73-92db-4a9c-a76b-7c749f976739-socket-dir\") pod \"csi-node-driver-bnc68\" (UID: \"00ae0c73-92db-4a9c-a76b-7c749f976739\") " pod="calico-system/csi-node-driver-bnc68" Feb 13 20:11:02.103796 kubelet[2611]: I0213 20:11:02.103753 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/00ae0c73-92db-4a9c-a76b-7c749f976739-varrun\") pod \"csi-node-driver-bnc68\" (UID: \"00ae0c73-92db-4a9c-a76b-7c749f976739\") " pod="calico-system/csi-node-driver-bnc68" Feb 13 20:11:02.103796 kubelet[2611]: I0213 
20:11:02.103790 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00ae0c73-92db-4a9c-a76b-7c749f976739-kubelet-dir\") pod \"csi-node-driver-bnc68\" (UID: \"00ae0c73-92db-4a9c-a76b-7c749f976739\") " pod="calico-system/csi-node-driver-bnc68" Feb 13 20:11:02.103975 kubelet[2611]: I0213 20:11:02.103824 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00ae0c73-92db-4a9c-a76b-7c749f976739-registration-dir\") pod \"csi-node-driver-bnc68\" (UID: \"00ae0c73-92db-4a9c-a76b-7c749f976739\") " pod="calico-system/csi-node-driver-bnc68" Feb 13 20:11:02.103975 kubelet[2611]: I0213 20:11:02.103896 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgnzj\" (UniqueName: \"kubernetes.io/projected/00ae0c73-92db-4a9c-a76b-7c749f976739-kube-api-access-bgnzj\") pod \"csi-node-driver-bnc68\" (UID: \"00ae0c73-92db-4a9c-a76b-7c749f976739\") " pod="calico-system/csi-node-driver-bnc68" Feb 13 20:11:02.113849 kubelet[2611]: E0213 20:11:02.113784 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.114352 kubelet[2611]: W0213 20:11:02.114307 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.114352 kubelet[2611]: E0213 20:11:02.114342 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.124352 kubelet[2611]: E0213 20:11:02.124304 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.124352 kubelet[2611]: W0213 20:11:02.124348 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.124617 kubelet[2611]: E0213 20:11:02.124466 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.125585 kubelet[2611]: E0213 20:11:02.125555 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.125585 kubelet[2611]: W0213 20:11:02.125583 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.125779 kubelet[2611]: E0213 20:11:02.125688 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:11:02.127323 kubelet[2611]: E0213 20:11:02.126024 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.127323 kubelet[2611]: W0213 20:11:02.126038 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.127323 kubelet[2611]: E0213 20:11:02.126117 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.129838 kubelet[2611]: E0213 20:11:02.129809 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.130120 kubelet[2611]: W0213 20:11:02.129984 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.130120 kubelet[2611]: E0213 20:11:02.130047 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.130940 kubelet[2611]: E0213 20:11:02.130655 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.130940 kubelet[2611]: W0213 20:11:02.130677 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.130940 kubelet[2611]: E0213 20:11:02.130705 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.133160 kubelet[2611]: E0213 20:11:02.132430 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.133160 kubelet[2611]: W0213 20:11:02.132450 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.133160 kubelet[2611]: E0213 20:11:02.132569 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.135413 kubelet[2611]: E0213 20:11:02.134183 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.135413 kubelet[2611]: W0213 20:11:02.134202 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.135413 kubelet[2611]: E0213 20:11:02.135162 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:11:02.136114 kubelet[2611]: E0213 20:11:02.136014 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.136114 kubelet[2611]: W0213 20:11:02.136033 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.136114 kubelet[2611]: E0213 20:11:02.136056 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.205804 kubelet[2611]: E0213 20:11:02.205747 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.205804 kubelet[2611]: W0213 20:11:02.205784 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.205804 kubelet[2611]: E0213 20:11:02.205816 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.206262 kubelet[2611]: E0213 20:11:02.206236 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.206262 kubelet[2611]: W0213 20:11:02.206251 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.206798 kubelet[2611]: E0213 20:11:02.206269 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.206798 kubelet[2611]: E0213 20:11:02.206622 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.206798 kubelet[2611]: W0213 20:11:02.206637 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.206798 kubelet[2611]: E0213 20:11:02.206715 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:02.207639 kubelet[2611]: E0213 20:11:02.207121 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:02.207639 kubelet[2611]: W0213 20:11:02.207137 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:02.207639 kubelet[2611]: E0213 20:11:02.207172 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Feb 13 20:11:03.004526 kubelet[2611]: E0213 20:11:03.004052 2611 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
Feb 13 20:11:03.004526 kubelet[2611]: E0213 20:11:03.004198 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-typha-certs podName:6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e nodeName:}" failed. No retries permitted until 2025-02-13 20:11:03.504169724 +0000 UTC m=+22.309853364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-typha-certs") pod "calico-typha-7795f9f674-fp5bd" (UID: "6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e") : failed to sync secret cache: timed out waiting for the condition
Feb 13 20:11:03.015124 kubelet[2611]: E0213 20:11:03.013180 2611 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 13 20:11:03.015124 kubelet[2611]: E0213 20:11:03.013220 2611 projected.go:200] Error preparing data for projected volume kube-api-access-znwr4 for pod calico-system/calico-typha-7795f9f674-fp5bd: failed to sync configmap cache: timed out waiting for the condition
Feb 13 20:11:03.015124 kubelet[2611]: E0213 20:11:03.013306 2611 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-kube-api-access-znwr4 podName:6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e nodeName:}" failed. No retries permitted until 2025-02-13 20:11:03.513279211 +0000 UTC m=+22.318962851 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-znwr4" (UniqueName: "kubernetes.io/projected/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-kube-api-access-znwr4") pod "calico-typha-7795f9f674-fp5bd" (UID: "6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e") : failed to sync configmap cache: timed out waiting for the condition
[the FlexVolume probe-failure messages resume, repeating from 20:11:03.060 through 20:11:03.585, interleaved with the records below; duplicate records omitted]
Feb 13 20:11:03.158381 containerd[1476]: time="2025-02-13T20:11:03.158311984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4khm,Uid:7e03ae09-2522-4225-9b0e-f0ca6b4697b9,Namespace:calico-system,Attempt:0,}"
Feb 13 20:11:03.203261 containerd[1476]: time="2025-02-13T20:11:03.202660978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:11:03.203261 containerd[1476]: time="2025-02-13T20:11:03.202777084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:11:03.203261 containerd[1476]: time="2025-02-13T20:11:03.202808122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:11:03.204436 containerd[1476]: time="2025-02-13T20:11:03.202960218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:11:03.250413 systemd[1]: run-containerd-runc-k8s.io-a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2-runc.u9TDOX.mount: Deactivated successfully.
Feb 13 20:11:03.260346 systemd[1]: Started cri-containerd-a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2.scope - libcontainer container a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2.
Feb 13 20:11:03.296356 containerd[1476]: time="2025-02-13T20:11:03.296241291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4khm,Uid:7e03ae09-2522-4225-9b0e-f0ca6b4697b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\""
Feb 13 20:11:03.300168 containerd[1476]: time="2025-02-13T20:11:03.299857539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Error: unexpected end of JSON input" Feb 13 20:11:03.583961 kubelet[2611]: E0213 20:11:03.580676 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:03.583961 kubelet[2611]: W0213 20:11:03.580701 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:03.583961 kubelet[2611]: E0213 20:11:03.580721 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:03.585000 kubelet[2611]: E0213 20:11:03.584948 2611 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:11:03.585000 kubelet[2611]: W0213 20:11:03.584978 2611 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:11:03.585318 kubelet[2611]: E0213 20:11:03.585003 2611 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:11:03.610196 containerd[1476]: time="2025-02-13T20:11:03.609396742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7795f9f674-fp5bd,Uid:6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e,Namespace:calico-system,Attempt:0,}" Feb 13 20:11:03.645780 containerd[1476]: time="2025-02-13T20:11:03.645618961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:11:03.645780 containerd[1476]: time="2025-02-13T20:11:03.645726296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:11:03.646181 containerd[1476]: time="2025-02-13T20:11:03.645768097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:03.646181 containerd[1476]: time="2025-02-13T20:11:03.645960855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:03.678512 systemd[1]: Started cri-containerd-e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a.scope - libcontainer container e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a. Feb 13 20:11:03.745507 containerd[1476]: time="2025-02-13T20:11:03.745232308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7795f9f674-fp5bd,Uid:6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\"" Feb 13 20:11:04.413046 kubelet[2611]: E0213 20:11:04.410202 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnc68" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" Feb 13 20:11:04.576412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590794850.mount: Deactivated successfully. 
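An aside on the repeated driver-call failures above: the kubelet probes each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ by executing the driver binary with the single argument init and parsing its stdout as JSON. Here the nodeagent~uds/uds binary does not exist yet (the pod2daemon-flexvol image whose pull starts in the same window is presumably what installs it, and the probe errors stop after 20:11:03.585), so each call returns empty output and the JSON unmarshal fails. A minimal sketch of the contract the kubelet expects, in Go — an illustration only, not Calico's actual driver:

package main

// Minimal FlexVolume-style driver sketch: kubelet invokes the binary with a
// subcommand ("init" during plugin probing) and expects a JSON status object
// on stdout. Empty output is what produced "unexpected end of JSON input" above.

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out)) // non-empty JSON is exactly what driver-call.go failed to read above
		return
	}
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}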
Feb 13 20:11:04.759017 containerd[1476]: time="2025-02-13T20:11:04.758836394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:04.760750 containerd[1476]: time="2025-02-13T20:11:04.760672450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 20:11:04.762273 containerd[1476]: time="2025-02-13T20:11:04.762197861Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:04.765629 containerd[1476]: time="2025-02-13T20:11:04.765555029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:04.766951 containerd[1476]: time="2025-02-13T20:11:04.766770806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.466860143s" Feb 13 20:11:04.766951 containerd[1476]: time="2025-02-13T20:11:04.766824405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:11:04.769254 containerd[1476]: time="2025-02-13T20:11:04.769221762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:11:04.771678 containerd[1476]: time="2025-02-13T20:11:04.771371769Z" level=info msg="CreateContainer within sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:11:04.796259 containerd[1476]: time="2025-02-13T20:11:04.796198958Z" level=info msg="CreateContainer within sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326\"" Feb 13 20:11:04.796996 containerd[1476]: time="2025-02-13T20:11:04.796953309Z" level=info msg="StartContainer for \"507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326\"" Feb 13 20:11:04.854623 systemd[1]: Started cri-containerd-507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326.scope - libcontainer container 507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326. Feb 13 20:11:04.907777 containerd[1476]: time="2025-02-13T20:11:04.907713834Z" level=info msg="StartContainer for \"507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326\" returns successfully" Feb 13 20:11:04.928242 systemd[1]: cri-containerd-507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326.scope: Deactivated successfully. Feb 13 20:11:05.175181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326-rootfs.mount: Deactivated successfully. 
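For scale, the pod2daemon-flexvol pull above reports a size of 6855165 bytes transferred in 1.466860143 s, i.e. roughly 6855165 / 1.4669 ≈ 4.7 MB/s; note also that containerd records both the mutable repo tag (v3.29.1) and the content-addressed repo digest (sha256:a63f8b…) for the same image reference.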
Feb 13 20:11:05.284597 containerd[1476]: time="2025-02-13T20:11:05.284227819Z" level=info msg="shim disconnected" id=507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326 namespace=k8s.io Feb 13 20:11:05.284597 containerd[1476]: time="2025-02-13T20:11:05.284314258Z" level=warning msg="cleaning up after shim disconnected" id=507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326 namespace=k8s.io Feb 13 20:11:05.284597 containerd[1476]: time="2025-02-13T20:11:05.284330358Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:11:06.408721 kubelet[2611]: E0213 20:11:06.408497 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnc68" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" Feb 13 20:11:07.125114 containerd[1476]: time="2025-02-13T20:11:07.125038923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:07.126666 containerd[1476]: time="2025-02-13T20:11:07.126586145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 20:11:07.128044 containerd[1476]: time="2025-02-13T20:11:07.127975076Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:07.131784 containerd[1476]: time="2025-02-13T20:11:07.131712171Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:07.133168 containerd[1476]: time="2025-02-13T20:11:07.132999040Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.363577098s" Feb 13 20:11:07.133168 containerd[1476]: time="2025-02-13T20:11:07.133045467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:11:07.138574 containerd[1476]: time="2025-02-13T20:11:07.135927793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:11:07.158425 containerd[1476]: time="2025-02-13T20:11:07.158374658Z" level=info msg="CreateContainer within sandbox \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:11:07.182331 containerd[1476]: time="2025-02-13T20:11:07.182281732Z" level=info msg="CreateContainer within sandbox \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\"" Feb 13 20:11:07.185082 containerd[1476]: time="2025-02-13T20:11:07.185021212Z" level=info msg="StartContainer for \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\"" Feb 13 20:11:07.246353 systemd[1]: Started 
cri-containerd-319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43.scope - libcontainer container 319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43. Feb 13 20:11:07.319545 containerd[1476]: time="2025-02-13T20:11:07.319379661Z" level=info msg="StartContainer for \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\" returns successfully" Feb 13 20:11:07.550929 kubelet[2611]: I0213 20:11:07.550810 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7795f9f674-fp5bd" podStartSLOduration=3.164749624 podStartE2EDuration="6.550761133s" podCreationTimestamp="2025-02-13 20:11:01 +0000 UTC" firstStartedPulling="2025-02-13 20:11:03.748733758 +0000 UTC m=+22.554417377" lastFinishedPulling="2025-02-13 20:11:07.13474525 +0000 UTC m=+25.940428886" observedRunningTime="2025-02-13 20:11:07.547108236 +0000 UTC m=+26.352791874" watchObservedRunningTime="2025-02-13 20:11:07.550761133 +0000 UTC m=+26.356444777" Feb 13 20:11:08.410128 kubelet[2611]: E0213 20:11:08.407791 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnc68" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" Feb 13 20:11:08.540679 kubelet[2611]: I0213 20:11:08.540625 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:11:10.409521 kubelet[2611]: E0213 20:11:10.409301 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnc68" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" Feb 13 20:11:11.707116 containerd[1476]: time="2025-02-13T20:11:11.707020213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:11.708601 containerd[1476]: time="2025-02-13T20:11:11.708515475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:11:11.710362 containerd[1476]: time="2025-02-13T20:11:11.710225028Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:11.714179 containerd[1476]: time="2025-02-13T20:11:11.714062232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:11.715666 containerd[1476]: time="2025-02-13T20:11:11.715131066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.576871983s" Feb 13 20:11:11.715666 containerd[1476]: time="2025-02-13T20:11:11.715181267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:11:11.718860 
containerd[1476]: time="2025-02-13T20:11:11.718801970Z" level=info msg="CreateContainer within sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:11:11.741995 containerd[1476]: time="2025-02-13T20:11:11.741923922Z" level=info msg="CreateContainer within sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730\"" Feb 13 20:11:11.742757 containerd[1476]: time="2025-02-13T20:11:11.742614379Z" level=info msg="StartContainer for \"83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730\"" Feb 13 20:11:11.798370 systemd[1]: Started cri-containerd-83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730.scope - libcontainer container 83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730. Feb 13 20:11:11.845180 containerd[1476]: time="2025-02-13T20:11:11.845015912Z" level=info msg="StartContainer for \"83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730\" returns successfully" Feb 13 20:11:12.407943 kubelet[2611]: E0213 20:11:12.407885 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnc68" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" Feb 13 20:11:12.791231 containerd[1476]: time="2025-02-13T20:11:12.791025141Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:11:12.794644 systemd[1]: cri-containerd-83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730.scope: Deactivated successfully. Feb 13 20:11:12.836692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730-rootfs.mount: Deactivated successfully. 
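The pod_startup_latency_tracker entry above is internally consistent: on the monotonic clock, image pulling ran from m=+22.554417377 to m=+25.940428886, i.e. 3.386011509 s, and podStartE2EDuration = watchObservedRunningTime − podCreationTimestamp = 20:11:07.550761133 − 20:11:01 = 6.550761133 s. Subtracting the pull time, 6.550761133 − 3.386011509 = 3.164749624 s, exactly the reported podStartSLOduration — the SLO figure is the end-to-end startup time with image-pull time excluded.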
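The cni-reload error just above is expected at this stage: install-cni has written the kubeconfig (/etc/cni/net.d/calico-kubeconfig) but no *.conflist network config exists yet, so containerd's fs-watch reload finds nothing to load. For orientation, a Calico conflist typically ends up looking roughly like the sketch below — illustrative defaults only, not the file actually written on this node:

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "datastore_type": "kubernetes",
      "ipam": { "type": "calico-ipam" },
      "policy": { "type": "k8s" },
      "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
    },
    {
      "type": "portmap",
      "snat": true,
      "capabilities": { "portMappings": true }
    }
  ]
}

Until a file like this is in place, the runtime keeps reporting NetworkReady=false, which is the "cni plugin not initialized" condition behind the recurring csi-node-driver-bnc68 sync errors.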
Feb 13 20:11:12.891575 kubelet[2611]: I0213 20:11:12.891535 2611 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 20:11:12.935213 kubelet[2611]: I0213 20:11:12.932468 2611 topology_manager.go:215] "Topology Admit Handler" podUID="2971b30d-28b2-4542-90ae-cd3031359da7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9nqgc" Feb 13 20:11:12.937259 kubelet[2611]: I0213 20:11:12.937191 2611 topology_manager.go:215] "Topology Admit Handler" podUID="30ffed34-3dac-4f10-aa92-96879d7c81be" podNamespace="calico-apiserver" podName="calico-apiserver-657bdbf897-8br7r" Feb 13 20:11:12.945295 kubelet[2611]: I0213 20:11:12.944502 2611 topology_manager.go:215] "Topology Admit Handler" podUID="4d678112-f176-4f7e-ac90-2c076ea6a206" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hv8vv" Feb 13 20:11:12.951706 kubelet[2611]: I0213 20:11:12.951665 2611 topology_manager.go:215] "Topology Admit Handler" podUID="d9574409-d923-4274-a785-828313eee44c" podNamespace="calico-system" podName="calico-kube-controllers-84fb4d47d4-ffz4k" Feb 13 20:11:12.952781 kubelet[2611]: I0213 20:11:12.952749 2611 topology_manager.go:215] "Topology Admit Handler" podUID="790c11b2-49c7-42d3-9545-23dd53e0fde9" podNamespace="calico-apiserver" podName="calico-apiserver-657bdbf897-6tf2l" Feb 13 20:11:12.962849 systemd[1]: Created slice kubepods-burstable-pod2971b30d_28b2_4542_90ae_cd3031359da7.slice - libcontainer container kubepods-burstable-pod2971b30d_28b2_4542_90ae_cd3031359da7.slice. Feb 13 20:11:12.979601 systemd[1]: Created slice kubepods-besteffort-pod30ffed34_3dac_4f10_aa92_96879d7c81be.slice - libcontainer container kubepods-besteffort-pod30ffed34_3dac_4f10_aa92_96879d7c81be.slice. Feb 13 20:11:12.994841 systemd[1]: Created slice kubepods-burstable-pod4d678112_f176_4f7e_ac90_2c076ea6a206.slice - libcontainer container kubepods-burstable-pod4d678112_f176_4f7e_ac90_2c076ea6a206.slice. Feb 13 20:11:13.011501 systemd[1]: Created slice kubepods-besteffort-podd9574409_d923_4274_a785_828313eee44c.slice - libcontainer container kubepods-besteffort-podd9574409_d923_4274_a785_828313eee44c.slice. Feb 13 20:11:13.022213 systemd[1]: Created slice kubepods-besteffort-pod790c11b2_49c7_42d3_9545_23dd53e0fde9.slice - libcontainer container kubepods-besteffort-pod790c11b2_49c7_42d3_9545_23dd53e0fde9.slice. 
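The "Created slice" entries above follow the kubelet's systemd cgroup naming: the pod's QoS class plus its UID with dashes mapped to underscores. A tiny Go sketch (a hypothetical helper, not kubelet source) reproduces the unit names seen here:

package main

// Reproduces the systemd slice naming visible in the "Created slice" log
// entries above: kubepods-<qos>-pod<uid with '-' -> '_'>.slice (sketch only).

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	return "kubepods-" + qosClass + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	// Prints kubepods-burstable-pod2971b30d_28b2_4542_90ae_cd3031359da7.slice,
	// matching the slice created for coredns-7db6d8ff4d-9nqgc above.
	fmt.Println(podSlice("burstable", "2971b30d-28b2-4542-90ae-cd3031359da7"))
}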
Feb 13 20:11:13.044862 kubelet[2611]: I0213 20:11:13.044739 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2971b30d-28b2-4542-90ae-cd3031359da7-config-volume\") pod \"coredns-7db6d8ff4d-9nqgc\" (UID: \"2971b30d-28b2-4542-90ae-cd3031359da7\") " pod="kube-system/coredns-7db6d8ff4d-9nqgc" Feb 13 20:11:13.045616 kubelet[2611]: I0213 20:11:13.044971 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9574409-d923-4274-a785-828313eee44c-tigera-ca-bundle\") pod \"calico-kube-controllers-84fb4d47d4-ffz4k\" (UID: \"d9574409-d923-4274-a785-828313eee44c\") " pod="calico-system/calico-kube-controllers-84fb4d47d4-ffz4k" Feb 13 20:11:13.045616 kubelet[2611]: I0213 20:11:13.045234 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd2g7\" (UniqueName: \"kubernetes.io/projected/790c11b2-49c7-42d3-9545-23dd53e0fde9-kube-api-access-zd2g7\") pod \"calico-apiserver-657bdbf897-6tf2l\" (UID: \"790c11b2-49c7-42d3-9545-23dd53e0fde9\") " pod="calico-apiserver/calico-apiserver-657bdbf897-6tf2l" Feb 13 20:11:13.045616 kubelet[2611]: I0213 20:11:13.045484 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fqf\" (UniqueName: \"kubernetes.io/projected/2971b30d-28b2-4542-90ae-cd3031359da7-kube-api-access-87fqf\") pod \"coredns-7db6d8ff4d-9nqgc\" (UID: \"2971b30d-28b2-4542-90ae-cd3031359da7\") " pod="kube-system/coredns-7db6d8ff4d-9nqgc" Feb 13 20:11:13.045616 kubelet[2611]: I0213 20:11:13.045521 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9j96\" (UniqueName: \"kubernetes.io/projected/30ffed34-3dac-4f10-aa92-96879d7c81be-kube-api-access-g9j96\") pod \"calico-apiserver-657bdbf897-8br7r\" (UID: \"30ffed34-3dac-4f10-aa92-96879d7c81be\") " pod="calico-apiserver/calico-apiserver-657bdbf897-8br7r" Feb 13 20:11:13.046262 kubelet[2611]: I0213 20:11:13.045739 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgdc\" (UniqueName: \"kubernetes.io/projected/4d678112-f176-4f7e-ac90-2c076ea6a206-kube-api-access-5jgdc\") pod \"coredns-7db6d8ff4d-hv8vv\" (UID: \"4d678112-f176-4f7e-ac90-2c076ea6a206\") " pod="kube-system/coredns-7db6d8ff4d-hv8vv" Feb 13 20:11:13.046262 kubelet[2611]: I0213 20:11:13.045774 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d678112-f176-4f7e-ac90-2c076ea6a206-config-volume\") pod \"coredns-7db6d8ff4d-hv8vv\" (UID: \"4d678112-f176-4f7e-ac90-2c076ea6a206\") " pod="kube-system/coredns-7db6d8ff4d-hv8vv" Feb 13 20:11:13.046262 kubelet[2611]: I0213 20:11:13.045809 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzb2\" (UniqueName: \"kubernetes.io/projected/d9574409-d923-4274-a785-828313eee44c-kube-api-access-gtzb2\") pod \"calico-kube-controllers-84fb4d47d4-ffz4k\" (UID: \"d9574409-d923-4274-a785-828313eee44c\") " pod="calico-system/calico-kube-controllers-84fb4d47d4-ffz4k" Feb 13 20:11:13.046262 kubelet[2611]: I0213 20:11:13.045842 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/30ffed34-3dac-4f10-aa92-96879d7c81be-calico-apiserver-certs\") pod \"calico-apiserver-657bdbf897-8br7r\" (UID: \"30ffed34-3dac-4f10-aa92-96879d7c81be\") " pod="calico-apiserver/calico-apiserver-657bdbf897-8br7r" Feb 13 20:11:13.046262 kubelet[2611]: I0213 20:11:13.045875 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/790c11b2-49c7-42d3-9545-23dd53e0fde9-calico-apiserver-certs\") pod \"calico-apiserver-657bdbf897-6tf2l\" (UID: \"790c11b2-49c7-42d3-9545-23dd53e0fde9\") " pod="calico-apiserver/calico-apiserver-657bdbf897-6tf2l" Feb 13 20:11:13.336010 containerd[1476]: time="2025-02-13T20:11:13.335581266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657bdbf897-8br7r,Uid:30ffed34-3dac-4f10-aa92-96879d7c81be,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:11:13.337144 containerd[1476]: time="2025-02-13T20:11:13.337060677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hv8vv,Uid:4d678112-f176-4f7e-ac90-2c076ea6a206,Namespace:kube-system,Attempt:0,}" Feb 13 20:11:13.337715 containerd[1476]: time="2025-02-13T20:11:13.337354610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657bdbf897-6tf2l,Uid:790c11b2-49c7-42d3-9545-23dd53e0fde9,Namespace:calico-apiserver,Attempt:0,}" Feb 13 20:11:13.337715 containerd[1476]: time="2025-02-13T20:11:13.337487342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fb4d47d4-ffz4k,Uid:d9574409-d923-4274-a785-828313eee44c,Namespace:calico-system,Attempt:0,}" Feb 13 20:11:13.337715 containerd[1476]: time="2025-02-13T20:11:13.337064028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nqgc,Uid:2971b30d-28b2-4542-90ae-cd3031359da7,Namespace:kube-system,Attempt:0,}" Feb 13 20:11:13.873723 containerd[1476]: time="2025-02-13T20:11:13.873633639Z" level=info msg="shim disconnected" id=83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730 namespace=k8s.io Feb 13 20:11:13.874581 containerd[1476]: time="2025-02-13T20:11:13.873861210Z" level=warning msg="cleaning up after shim disconnected" id=83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730 namespace=k8s.io Feb 13 20:11:13.874581 containerd[1476]: time="2025-02-13T20:11:13.873889349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:11:13.898165 containerd[1476]: time="2025-02-13T20:11:13.898055949Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:11:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:11:14.202546 containerd[1476]: time="2025-02-13T20:11:14.202218033Z" level=error msg="Failed to destroy network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.209026 containerd[1476]: time="2025-02-13T20:11:14.208137635Z" level=error msg="encountered an error cleaning up failed sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.209867 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399-shm.mount: Deactivated successfully. Feb 13 20:11:14.213406 containerd[1476]: time="2025-02-13T20:11:14.210271420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fb4d47d4-ffz4k,Uid:d9574409-d923-4274-a785-828313eee44c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.214810 kubelet[2611]: E0213 20:11:14.214047 2611 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.214810 kubelet[2611]: E0213 20:11:14.214157 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fb4d47d4-ffz4k" Feb 13 20:11:14.214810 kubelet[2611]: E0213 20:11:14.214195 2611 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fb4d47d4-ffz4k" Feb 13 20:11:14.215543 kubelet[2611]: E0213 20:11:14.214260 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84fb4d47d4-ffz4k_calico-system(d9574409-d923-4274-a785-828313eee44c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84fb4d47d4-ffz4k_calico-system(d9574409-d923-4274-a785-828313eee44c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84fb4d47d4-ffz4k" podUID="d9574409-d923-4274-a785-828313eee44c" Feb 13 20:11:14.247918 containerd[1476]: time="2025-02-13T20:11:14.246333542Z" level=error msg="Failed to destroy network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 20:11:14.247918 containerd[1476]: time="2025-02-13T20:11:14.246872165Z" level=error msg="encountered an error cleaning up failed sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.247918 containerd[1476]: time="2025-02-13T20:11:14.247022002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657bdbf897-8br7r,Uid:30ffed34-3dac-4f10-aa92-96879d7c81be,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.248306 kubelet[2611]: E0213 20:11:14.247366 2611 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.248306 kubelet[2611]: E0213 20:11:14.247451 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-657bdbf897-8br7r" Feb 13 20:11:14.248306 kubelet[2611]: E0213 20:11:14.247484 2611 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-657bdbf897-8br7r" Feb 13 20:11:14.248503 kubelet[2611]: E0213 20:11:14.247548 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-657bdbf897-8br7r_calico-apiserver(30ffed34-3dac-4f10-aa92-96879d7c81be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-657bdbf897-8br7r_calico-apiserver(30ffed34-3dac-4f10-aa92-96879d7c81be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-657bdbf897-8br7r" podUID="30ffed34-3dac-4f10-aa92-96879d7c81be" Feb 13 20:11:14.254658 containerd[1476]: time="2025-02-13T20:11:14.254273489Z" level=error msg="Failed to destroy network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.255340 containerd[1476]: time="2025-02-13T20:11:14.255274815Z" level=error msg="encountered an error cleaning up failed sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.255618 containerd[1476]: time="2025-02-13T20:11:14.255529373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hv8vv,Uid:4d678112-f176-4f7e-ac90-2c076ea6a206,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.256274 kubelet[2611]: E0213 20:11:14.256174 2611 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.256728 kubelet[2611]: E0213 20:11:14.256611 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hv8vv" Feb 13 20:11:14.256728 kubelet[2611]: E0213 20:11:14.256670 2611 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hv8vv" Feb 13 20:11:14.257148 kubelet[2611]: E0213 20:11:14.256932 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hv8vv_kube-system(4d678112-f176-4f7e-ac90-2c076ea6a206)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hv8vv_kube-system(4d678112-f176-4f7e-ac90-2c076ea6a206)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hv8vv" podUID="4d678112-f176-4f7e-ac90-2c076ea6a206" Feb 13 20:11:14.271307 containerd[1476]: time="2025-02-13T20:11:14.271216644Z" level=error msg="Failed to destroy network for sandbox 
\"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.272373 containerd[1476]: time="2025-02-13T20:11:14.272128423Z" level=error msg="encountered an error cleaning up failed sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.272373 containerd[1476]: time="2025-02-13T20:11:14.272215333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nqgc,Uid:2971b30d-28b2-4542-90ae-cd3031359da7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.273201 containerd[1476]: time="2025-02-13T20:11:14.272879478Z" level=error msg="Failed to destroy network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.273300 kubelet[2611]: E0213 20:11:14.272901 2611 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.273300 kubelet[2611]: E0213 20:11:14.273026 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9nqgc" Feb 13 20:11:14.273300 kubelet[2611]: E0213 20:11:14.273063 2611 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9nqgc" Feb 13 20:11:14.274220 kubelet[2611]: E0213 20:11:14.273946 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9nqgc_kube-system(2971b30d-28b2-4542-90ae-cd3031359da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9nqgc_kube-system(2971b30d-28b2-4542-90ae-cd3031359da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9nqgc" podUID="2971b30d-28b2-4542-90ae-cd3031359da7" Feb 13 20:11:14.274341 containerd[1476]: time="2025-02-13T20:11:14.273691357Z" level=error msg="encountered an error cleaning up failed sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.274341 containerd[1476]: time="2025-02-13T20:11:14.273758494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657bdbf897-6tf2l,Uid:790c11b2-49c7-42d3-9545-23dd53e0fde9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.274671 kubelet[2611]: E0213 20:11:14.274636 2611 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.275047 kubelet[2611]: E0213 20:11:14.274854 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-657bdbf897-6tf2l" Feb 13 20:11:14.275047 kubelet[2611]: E0213 20:11:14.274915 2611 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-657bdbf897-6tf2l" Feb 13 20:11:14.275047 kubelet[2611]: E0213 20:11:14.275000 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-657bdbf897-6tf2l_calico-apiserver(790c11b2-49c7-42d3-9545-23dd53e0fde9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-657bdbf897-6tf2l_calico-apiserver(790c11b2-49c7-42d3-9545-23dd53e0fde9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-657bdbf897-6tf2l" podUID="790c11b2-49c7-42d3-9545-23dd53e0fde9" Feb 13 20:11:14.415939 systemd[1]: Created slice kubepods-besteffort-pod00ae0c73_92db_4a9c_a76b_7c749f976739.slice - libcontainer container kubepods-besteffort-pod00ae0c73_92db_4a9c_a76b_7c749f976739.slice. Feb 13 20:11:14.420006 containerd[1476]: time="2025-02-13T20:11:14.419953607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnc68,Uid:00ae0c73-92db-4a9c-a76b-7c749f976739,Namespace:calico-system,Attempt:0,}" Feb 13 20:11:14.502932 containerd[1476]: time="2025-02-13T20:11:14.502047776Z" level=error msg="Failed to destroy network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.502932 containerd[1476]: time="2025-02-13T20:11:14.502632525Z" level=error msg="encountered an error cleaning up failed sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.502932 containerd[1476]: time="2025-02-13T20:11:14.502718855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnc68,Uid:00ae0c73-92db-4a9c-a76b-7c749f976739,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.505112 kubelet[2611]: E0213 20:11:14.503254 2611 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.505112 kubelet[2611]: E0213 20:11:14.503474 2611 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bnc68" Feb 13 20:11:14.505112 kubelet[2611]: E0213 20:11:14.503563 2611 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bnc68" Feb 13 20:11:14.505375 kubelet[2611]: E0213 20:11:14.503660 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-bnc68_calico-system(00ae0c73-92db-4a9c-a76b-7c749f976739)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bnc68_calico-system(00ae0c73-92db-4a9c-a76b-7c749f976739)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bnc68" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" Feb 13 20:11:14.563144 kubelet[2611]: I0213 20:11:14.562508 2611 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:14.563673 containerd[1476]: time="2025-02-13T20:11:14.563613444Z" level=info msg="StopPodSandbox for \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\"" Feb 13 20:11:14.565632 containerd[1476]: time="2025-02-13T20:11:14.563881059Z" level=info msg="Ensure that sandbox c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec in task-service has been cleanup successfully" Feb 13 20:11:14.566610 kubelet[2611]: I0213 20:11:14.566120 2611 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:14.568331 containerd[1476]: time="2025-02-13T20:11:14.568293047Z" level=info msg="StopPodSandbox for \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\"" Feb 13 20:11:14.569004 containerd[1476]: time="2025-02-13T20:11:14.568809152Z" level=info msg="Ensure that sandbox 18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9 in task-service has been cleanup successfully" Feb 13 20:11:14.575053 kubelet[2611]: I0213 20:11:14.574600 2611 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:14.582126 containerd[1476]: time="2025-02-13T20:11:14.580857255Z" level=info msg="StopPodSandbox for \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\"" Feb 13 20:11:14.582126 containerd[1476]: time="2025-02-13T20:11:14.581160718Z" level=info msg="Ensure that sandbox 577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399 in task-service has been cleanup successfully" Feb 13 20:11:14.594560 containerd[1476]: time="2025-02-13T20:11:14.593516628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:11:14.598204 kubelet[2611]: I0213 20:11:14.598171 2611 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:14.605483 containerd[1476]: time="2025-02-13T20:11:14.604564742Z" level=info msg="StopPodSandbox for \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\"" Feb 13 20:11:14.605483 containerd[1476]: time="2025-02-13T20:11:14.604819661Z" level=info msg="Ensure that sandbox 6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34 in task-service has been cleanup successfully" Feb 13 20:11:14.613656 kubelet[2611]: I0213 20:11:14.613594 2611 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:14.632117 containerd[1476]: 
time="2025-02-13T20:11:14.631644747Z" level=info msg="StopPodSandbox for \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\"" Feb 13 20:11:14.636650 containerd[1476]: time="2025-02-13T20:11:14.635446969Z" level=info msg="Ensure that sandbox 67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9 in task-service has been cleanup successfully" Feb 13 20:11:14.646647 kubelet[2611]: I0213 20:11:14.646486 2611 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:14.667805 containerd[1476]: time="2025-02-13T20:11:14.667606354Z" level=info msg="StopPodSandbox for \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\"" Feb 13 20:11:14.673938 containerd[1476]: time="2025-02-13T20:11:14.672838197Z" level=info msg="Ensure that sandbox a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec in task-service has been cleanup successfully" Feb 13 20:11:14.736331 containerd[1476]: time="2025-02-13T20:11:14.736259133Z" level=error msg="StopPodSandbox for \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\" failed" error="failed to destroy network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.736690 kubelet[2611]: E0213 20:11:14.736632 2611 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:14.736799 kubelet[2611]: E0213 20:11:14.736715 2611 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9"} Feb 13 20:11:14.736865 kubelet[2611]: E0213 20:11:14.736801 2611 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2971b30d-28b2-4542-90ae-cd3031359da7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:11:14.736978 kubelet[2611]: E0213 20:11:14.736861 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2971b30d-28b2-4542-90ae-cd3031359da7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9nqgc" podUID="2971b30d-28b2-4542-90ae-cd3031359da7" Feb 13 20:11:14.737224 containerd[1476]: time="2025-02-13T20:11:14.737173739Z" level=error 
msg="StopPodSandbox for \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\" failed" error="failed to destroy network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.737502 kubelet[2611]: E0213 20:11:14.737460 2611 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:14.737602 kubelet[2611]: E0213 20:11:14.737517 2611 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec"} Feb 13 20:11:14.737602 kubelet[2611]: E0213 20:11:14.737565 2611 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30ffed34-3dac-4f10-aa92-96879d7c81be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:11:14.737759 kubelet[2611]: E0213 20:11:14.737598 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30ffed34-3dac-4f10-aa92-96879d7c81be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-657bdbf897-8br7r" podUID="30ffed34-3dac-4f10-aa92-96879d7c81be" Feb 13 20:11:14.758239 containerd[1476]: time="2025-02-13T20:11:14.757660365Z" level=error msg="StopPodSandbox for \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\" failed" error="failed to destroy network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.758421 kubelet[2611]: E0213 20:11:14.757958 2611 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:14.758421 kubelet[2611]: E0213 20:11:14.758046 2611 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399"} Feb 13 20:11:14.758421 kubelet[2611]: E0213 20:11:14.758218 2611 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d9574409-d923-4274-a785-828313eee44c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:11:14.758421 kubelet[2611]: E0213 20:11:14.758266 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d9574409-d923-4274-a785-828313eee44c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84fb4d47d4-ffz4k" podUID="d9574409-d923-4274-a785-828313eee44c" Feb 13 20:11:14.791714 containerd[1476]: time="2025-02-13T20:11:14.791576240Z" level=error msg="StopPodSandbox for \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\" failed" error="failed to destroy network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.792140 kubelet[2611]: E0213 20:11:14.791965 2611 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:14.792140 kubelet[2611]: E0213 20:11:14.792036 2611 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34"} Feb 13 20:11:14.792140 kubelet[2611]: E0213 20:11:14.792106 2611 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00ae0c73-92db-4a9c-a76b-7c749f976739\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:11:14.792424 kubelet[2611]: E0213 20:11:14.792163 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00ae0c73-92db-4a9c-a76b-7c749f976739\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bnc68" podUID="00ae0c73-92db-4a9c-a76b-7c749f976739" Feb 13 20:11:14.808135 kubelet[2611]: I0213 20:11:14.806631 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:11:14.811294 containerd[1476]: time="2025-02-13T20:11:14.811077083Z" level=error msg="StopPodSandbox for \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\" failed" error="failed to destroy network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.813703 kubelet[2611]: E0213 20:11:14.812856 2611 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:14.813703 kubelet[2611]: E0213 20:11:14.812920 2611 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec"} Feb 13 20:11:14.813703 kubelet[2611]: E0213 20:11:14.812975 2611 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d678112-f176-4f7e-ac90-2c076ea6a206\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:11:14.814462 kubelet[2611]: E0213 20:11:14.813015 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d678112-f176-4f7e-ac90-2c076ea6a206\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hv8vv" podUID="4d678112-f176-4f7e-ac90-2c076ea6a206" Feb 13 20:11:14.827836 containerd[1476]: time="2025-02-13T20:11:14.827759254Z" level=error msg="StopPodSandbox for \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\" failed" error="failed to destroy network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:11:14.829395 kubelet[2611]: E0213 20:11:14.829333 2611 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:14.829553 kubelet[2611]: E0213 20:11:14.829411 2611 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9"} Feb 13 20:11:14.829553 kubelet[2611]: E0213 20:11:14.829466 2611 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"790c11b2-49c7-42d3-9545-23dd53e0fde9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:11:14.829553 kubelet[2611]: E0213 20:11:14.829507 2611 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"790c11b2-49c7-42d3-9545-23dd53e0fde9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-657bdbf897-6tf2l" podUID="790c11b2-49c7-42d3-9545-23dd53e0fde9" Feb 13 20:11:14.840493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec-shm.mount: Deactivated successfully. Feb 13 20:11:14.841078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9-shm.mount: Deactivated successfully. Feb 13 20:11:14.841860 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec-shm.mount: Deactivated successfully. Feb 13 20:11:14.842285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9-shm.mount: Deactivated successfully. Feb 13 20:11:21.968576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695605961.mount: Deactivated successfully. 
Feb 13 20:11:22.018984 containerd[1476]: time="2025-02-13T20:11:22.018892277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:22.020400 containerd[1476]: time="2025-02-13T20:11:22.020325764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:11:22.022518 containerd[1476]: time="2025-02-13T20:11:22.022406739Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:22.026244 containerd[1476]: time="2025-02-13T20:11:22.026152214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:22.028193 containerd[1476]: time="2025-02-13T20:11:22.027158715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.433579302s" Feb 13 20:11:22.028193 containerd[1476]: time="2025-02-13T20:11:22.027209994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:11:22.053810 containerd[1476]: time="2025-02-13T20:11:22.053728482Z" level=info msg="CreateContainer within sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:11:22.089630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2348278680.mount: Deactivated successfully. Feb 13 20:11:22.091552 containerd[1476]: time="2025-02-13T20:11:22.091468347Z" level=info msg="CreateContainer within sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\"" Feb 13 20:11:22.092516 containerd[1476]: time="2025-02-13T20:11:22.092460857Z" level=info msg="StartContainer for \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\"" Feb 13 20:11:22.137375 systemd[1]: Started cri-containerd-0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3.scope - libcontainer container 0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3. Feb 13 20:11:22.192514 containerd[1476]: time="2025-02-13T20:11:22.192191558Z" level=info msg="StartContainer for \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\" returns successfully" Feb 13 20:11:22.311618 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:11:22.312377 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
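[Editor's note] The pull above moved roughly 142 MB in 7.43 s. The same operation can be reproduced against containerd directly with its Go client; a sketch, assuming the default socket path and the "k8s.io" namespace the CRI plugin uses:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// kubelet drives pulls through CRI; this goes straight to containerd.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s in %s\n", img.Name(), time.Since(start))
}
```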
Feb 13 20:11:22.728378 kubelet[2611]: I0213 20:11:22.727290 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g4khm" podStartSLOduration=2.998249723 podStartE2EDuration="21.727240649s" podCreationTimestamp="2025-02-13 20:11:01 +0000 UTC" firstStartedPulling="2025-02-13 20:11:03.29922867 +0000 UTC m=+22.104912290" lastFinishedPulling="2025-02-13 20:11:22.028219593 +0000 UTC m=+40.833903216" observedRunningTime="2025-02-13 20:11:22.720609928 +0000 UTC m=+41.526293599" watchObservedRunningTime="2025-02-13 20:11:22.727240649 +0000 UTC m=+41.532924293" Feb 13 20:11:24.269120 kernel: bpftool[3914]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:11:24.606741 systemd-networkd[1385]: vxlan.calico: Link UP Feb 13 20:11:24.606757 systemd-networkd[1385]: vxlan.calico: Gained carrier Feb 13 20:11:25.411103 containerd[1476]: time="2025-02-13T20:11:25.410710418Z" level=info msg="StopPodSandbox for \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\"" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.473 [INFO][4013] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.473 [INFO][4013] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" iface="eth0" netns="/var/run/netns/cni-71fa8689-62ed-cfec-7ff1-56724858418e" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.477 [INFO][4013] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" iface="eth0" netns="/var/run/netns/cni-71fa8689-62ed-cfec-7ff1-56724858418e" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.477 [INFO][4013] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" iface="eth0" netns="/var/run/netns/cni-71fa8689-62ed-cfec-7ff1-56724858418e" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.477 [INFO][4013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.477 [INFO][4013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.508 [INFO][4019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.508 [INFO][4019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.508 [INFO][4019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.517 [WARNING][4019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.517 [INFO][4019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.519 [INFO][4019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:25.524356 containerd[1476]: 2025-02-13 20:11:25.522 [INFO][4013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:25.526357 containerd[1476]: time="2025-02-13T20:11:25.524509243Z" level=info msg="TearDown network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\" successfully" Feb 13 20:11:25.526357 containerd[1476]: time="2025-02-13T20:11:25.526208824Z" level=info msg="StopPodSandbox for \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\" returns successfully" Feb 13 20:11:25.529526 containerd[1476]: time="2025-02-13T20:11:25.527809123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657bdbf897-6tf2l,Uid:790c11b2-49c7-42d3-9545-23dd53e0fde9,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:11:25.532888 systemd[1]: run-netns-cni\x2d71fa8689\x2d62ed\x2dcfec\x2d7ff1\x2d56724858418e.mount: Deactivated successfully. 
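[Editor's note] Note the [WARNING] in the teardown above: the IPAM plugin is asked to release an address that was never allocated, because the original sandbox setup failed before IP assignment. CNI DEL is expected to be idempotent, so the plugin logs the warning and reports success rather than erroring, which is what lets kubelet's retry loop converge. A toy sketch of that convention (names are illustrative, not Calico's API):

```go
package main

import "fmt"

// ipamStore maps an allocation handle to its IP. Calico's real IPAM keeps
// this state in the datastore under the host-wide lock seen in the log.
type ipamStore map[string]string

// release is deliberately idempotent: a missing handle is a warning, not
// an error, so a DEL for a half-created sandbox still succeeds.
func (s ipamStore) release(handleID string) {
	if _, ok := s[handleID]; !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handleID)
		return
	}
	delete(s, handleID)
	fmt.Printf("released %s\n", handleID)
}

func main() {
	s := ipamStore{}
	s.release("k8s-pod-network.6719220914") // never allocated: warn and continue
}
```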
Feb 13 20:11:25.702959 systemd-networkd[1385]: caliad77f1fd1fe: Link UP Feb 13 20:11:25.707197 systemd-networkd[1385]: caliad77f1fd1fe: Gained carrier Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.601 [INFO][4026] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0 calico-apiserver-657bdbf897- calico-apiserver 790c11b2-49c7-42d3-9545-23dd53e0fde9 824 0 2025-02-13 20:11:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:657bdbf897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal calico-apiserver-657bdbf897-6tf2l eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad77f1fd1fe [] []}} ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-6tf2l" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.601 [INFO][4026] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-6tf2l" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.642 [INFO][4036] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" HandleID="k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.655 [INFO][4036] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" HandleID="k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", "pod":"calico-apiserver-657bdbf897-6tf2l", "timestamp":"2025-02-13 20:11:25.642171992 +0000 UTC"}, Hostname:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.655 [INFO][4036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.655 [INFO][4036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.655 [INFO][4036] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.658 [INFO][4036] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.663 [INFO][4036] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.670 [INFO][4036] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.673 [INFO][4036] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.677 [INFO][4036] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.677 [INFO][4036] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.680 [INFO][4036] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62 Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.686 [INFO][4036] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.694 [INFO][4036] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.129/26] block=192.168.62.128/26 handle="k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.695 [INFO][4036] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.129/26] handle="k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.695 [INFO][4036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:11:25.739349 containerd[1476]: 2025-02-13 20:11:25.695 [INFO][4036] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.129/26] IPv6=[] ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" HandleID="k8s-pod-network.e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.745273 containerd[1476]: 2025-02-13 20:11:25.698 [INFO][4026] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-6tf2l" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0", GenerateName:"calico-apiserver-657bdbf897-", Namespace:"calico-apiserver", SelfLink:"", UID:"790c11b2-49c7-42d3-9545-23dd53e0fde9", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657bdbf897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-657bdbf897-6tf2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad77f1fd1fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:25.745273 containerd[1476]: 2025-02-13 20:11:25.698 [INFO][4026] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.129/32] ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-6tf2l" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.745273 containerd[1476]: 2025-02-13 20:11:25.698 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad77f1fd1fe ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-6tf2l" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.745273 containerd[1476]: 2025-02-13 20:11:25.706 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Namespace="calico-apiserver" 
Pod="calico-apiserver-657bdbf897-6tf2l" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.745273 containerd[1476]: 2025-02-13 20:11:25.707 [INFO][4026] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-6tf2l" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0", GenerateName:"calico-apiserver-657bdbf897-", Namespace:"calico-apiserver", SelfLink:"", UID:"790c11b2-49c7-42d3-9545-23dd53e0fde9", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657bdbf897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62", Pod:"calico-apiserver-657bdbf897-6tf2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad77f1fd1fe", MAC:"66:d1:7b:5c:2e:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:25.745273 containerd[1476]: 2025-02-13 20:11:25.727 [INFO][4026] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-6tf2l" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:25.782390 containerd[1476]: time="2025-02-13T20:11:25.782210659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:11:25.782390 containerd[1476]: time="2025-02-13T20:11:25.782297978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:11:25.782390 containerd[1476]: time="2025-02-13T20:11:25.782318318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:25.782908 containerd[1476]: time="2025-02-13T20:11:25.782664574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:25.827366 systemd[1]: Started cri-containerd-e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62.scope - libcontainer container e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62. Feb 13 20:11:25.888328 containerd[1476]: time="2025-02-13T20:11:25.888168868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657bdbf897-6tf2l,Uid:790c11b2-49c7-42d3-9545-23dd53e0fde9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62\"" Feb 13 20:11:25.892240 containerd[1476]: time="2025-02-13T20:11:25.892146135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:11:26.409310 containerd[1476]: time="2025-02-13T20:11:26.408768003Z" level=info msg="StopPodSandbox for \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\"" Feb 13 20:11:26.409310 containerd[1476]: time="2025-02-13T20:11:26.408947377Z" level=info msg="StopPodSandbox for \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\"" Feb 13 20:11:26.442604 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.517 [INFO][4125] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.518 [INFO][4125] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" iface="eth0" netns="/var/run/netns/cni-6a573d3e-91f1-2d89-5a9d-717ffe21dc3f" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.520 [INFO][4125] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" iface="eth0" netns="/var/run/netns/cni-6a573d3e-91f1-2d89-5a9d-717ffe21dc3f" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.524 [INFO][4125] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" iface="eth0" netns="/var/run/netns/cni-6a573d3e-91f1-2d89-5a9d-717ffe21dc3f" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.524 [INFO][4125] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.524 [INFO][4125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.569 [INFO][4138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.570 [INFO][4138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.570 [INFO][4138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.580 [WARNING][4138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.580 [INFO][4138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.584 [INFO][4138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:26.589196 containerd[1476]: 2025-02-13 20:11:26.587 [INFO][4125] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:26.590609 containerd[1476]: time="2025-02-13T20:11:26.590262409Z" level=info msg="TearDown network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\" successfully" Feb 13 20:11:26.590609 containerd[1476]: time="2025-02-13T20:11:26.590312285Z" level=info msg="StopPodSandbox for \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\" returns successfully" Feb 13 20:11:26.598194 containerd[1476]: time="2025-02-13T20:11:26.597919208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nqgc,Uid:2971b30d-28b2-4542-90ae-cd3031359da7,Namespace:kube-system,Attempt:1,}" Feb 13 20:11:26.599535 systemd[1]: run-netns-cni\x2d6a573d3e\x2d91f1\x2d2d89\x2d5a9d\x2d717ffe21dc3f.mount: Deactivated successfully. Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.517 [INFO][4124] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.518 [INFO][4124] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" iface="eth0" netns="/var/run/netns/cni-2bce712e-39ce-1bbd-958a-8b069845fe2f" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.520 [INFO][4124] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" iface="eth0" netns="/var/run/netns/cni-2bce712e-39ce-1bbd-958a-8b069845fe2f" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.523 [INFO][4124] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" iface="eth0" netns="/var/run/netns/cni-2bce712e-39ce-1bbd-958a-8b069845fe2f" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.523 [INFO][4124] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.524 [INFO][4124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.574 [INFO][4139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.575 [INFO][4139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.583 [INFO][4139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.603 [WARNING][4139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.603 [INFO][4139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.607 [INFO][4139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:26.615177 containerd[1476]: 2025-02-13 20:11:26.609 [INFO][4124] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:26.615177 containerd[1476]: time="2025-02-13T20:11:26.613431524Z" level=info msg="TearDown network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\" successfully" Feb 13 20:11:26.615177 containerd[1476]: time="2025-02-13T20:11:26.613472571Z" level=info msg="StopPodSandbox for \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\" returns successfully" Feb 13 20:11:26.615177 containerd[1476]: time="2025-02-13T20:11:26.614569688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fb4d47d4-ffz4k,Uid:d9574409-d923-4274-a785-828313eee44c,Namespace:calico-system,Attempt:1,}" Feb 13 20:11:26.622400 systemd[1]: run-netns-cni\x2d2bce712e\x2d39ce\x2d1bbd\x2d958a\x2d8b069845fe2f.mount: Deactivated successfully. 
Feb 13 20:11:26.850276 systemd-networkd[1385]: califfb9fe776a6: Link UP Feb 13 20:11:26.853331 systemd-networkd[1385]: califfb9fe776a6: Gained carrier Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.729 [INFO][4151] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0 calico-kube-controllers-84fb4d47d4- calico-system d9574409-d923-4274-a785-828313eee44c 834 0 2025-02-13 20:11:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84fb4d47d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal calico-kube-controllers-84fb4d47d4-ffz4k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califfb9fe776a6 [] []}} ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Namespace="calico-system" Pod="calico-kube-controllers-84fb4d47d4-ffz4k" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.730 [INFO][4151] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Namespace="calico-system" Pod="calico-kube-controllers-84fb4d47d4-ffz4k" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.788 [INFO][4174] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.802 [INFO][4174] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", "pod":"calico-kube-controllers-84fb4d47d4-ffz4k", "timestamp":"2025-02-13 20:11:26.788478809 +0000 UTC"}, Hostname:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.803 [INFO][4174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.803 [INFO][4174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.803 [INFO][4174] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.805 [INFO][4174] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.810 [INFO][4174] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.816 [INFO][4174] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.821 [INFO][4174] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.823 [INFO][4174] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.823 [INFO][4174] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.825 [INFO][4174] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91 Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.830 [INFO][4174] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.838 [INFO][4174] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.130/26] block=192.168.62.128/26 handle="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.839 [INFO][4174] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.130/26] handle="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.839 [INFO][4174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:11:26.890368 containerd[1476]: 2025-02-13 20:11:26.839 [INFO][4174] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.130/26] IPv6=[] ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.892026 containerd[1476]: 2025-02-13 20:11:26.842 [INFO][4151] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Namespace="calico-system" Pod="calico-kube-controllers-84fb4d47d4-ffz4k" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0", GenerateName:"calico-kube-controllers-84fb4d47d4-", Namespace:"calico-system", SelfLink:"", UID:"d9574409-d923-4274-a785-828313eee44c", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fb4d47d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-84fb4d47d4-ffz4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfb9fe776a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:26.892026 containerd[1476]: 2025-02-13 20:11:26.842 [INFO][4151] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.130/32] ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Namespace="calico-system" Pod="calico-kube-controllers-84fb4d47d4-ffz4k" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.892026 containerd[1476]: 2025-02-13 20:11:26.842 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfb9fe776a6 ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Namespace="calico-system" Pod="calico-kube-controllers-84fb4d47d4-ffz4k" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.892026 containerd[1476]: 2025-02-13 20:11:26.853 [INFO][4151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Namespace="calico-system" Pod="calico-kube-controllers-84fb4d47d4-ffz4k" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.892026 containerd[1476]: 2025-02-13 20:11:26.856 [INFO][4151] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Namespace="calico-system" Pod="calico-kube-controllers-84fb4d47d4-ffz4k" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0", GenerateName:"calico-kube-controllers-84fb4d47d4-", Namespace:"calico-system", SelfLink:"", UID:"d9574409-d923-4274-a785-828313eee44c", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fb4d47d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91", Pod:"calico-kube-controllers-84fb4d47d4-ffz4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfb9fe776a6", MAC:"ca:4c:50:89:b6:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:26.892026 containerd[1476]: 2025-02-13 20:11:26.883 [INFO][4151] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Namespace="calico-system" Pod="calico-kube-controllers-84fb4d47d4-ffz4k" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:26.941852 systemd-networkd[1385]: cali95405cddb01: Link UP Feb 13 20:11:26.944380 systemd-networkd[1385]: cali95405cddb01: Gained carrier Feb 13 20:11:26.961480 containerd[1476]: time="2025-02-13T20:11:26.960303686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:11:26.961480 containerd[1476]: time="2025-02-13T20:11:26.960391525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:11:26.961480 containerd[1476]: time="2025-02-13T20:11:26.960418497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:26.961480 containerd[1476]: time="2025-02-13T20:11:26.960557092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.727 [INFO][4150] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0 coredns-7db6d8ff4d- kube-system 2971b30d-28b2-4542-90ae-cd3031359da7 833 0 2025-02-13 20:10:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal coredns-7db6d8ff4d-9nqgc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95405cddb01 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nqgc" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.727 [INFO][4150] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nqgc" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.790 [INFO][4173] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" HandleID="k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.806 [INFO][4173] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" HandleID="k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291b50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-9nqgc", "timestamp":"2025-02-13 20:11:26.79059327 +0000 UTC"}, Hostname:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.806 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.839 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.840 [INFO][4173] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.843 [INFO][4173] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.855 [INFO][4173] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.876 [INFO][4173] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.882 [INFO][4173] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.890 [INFO][4173] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.890 [INFO][4173] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.895 [INFO][4173] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0 Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.907 [INFO][4173] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.921 [INFO][4173] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.131/26] block=192.168.62.128/26 handle="k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.922 [INFO][4173] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.131/26] handle="k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.922 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
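The block above is one complete Calico IPAM transaction: the plugin serializes all address operations on the node behind a host-wide lock, looks up the host's affine block (192.168.62.128/26), loads it, claims the next free address, writes the block back to the datastore to make the claim durable, and only then releases the lock and reports the assignment. A minimal sketch (not part of the log) of how such claims could be tallied from journal lines with the Python standard library; the regex is an assumption derived from the entries shown here:

    import re
    from collections import defaultdict

    CLAIM = re.compile(
        r"Successfully claimed IPs: \[(?P<ip>[\d./]+)\] block=(?P<block>[\d./]+)"
    )

    def tally_claims(journal_lines):
        claims = defaultdict(list)   # block CIDR -> claimed addresses
        for line in journal_lines:
            m = CLAIM.search(line)
            if m:
                claims[m.group("block")].append(m.group("ip"))
        return claims

    # Fed this section, it would yield something like:
    # {"192.168.62.128/26": ["192.168.62.131/26", "192.168.62.132/26", ...]}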
Feb 13 20:11:27.001632 containerd[1476]: 2025-02-13 20:11:26.922 [INFO][4173] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.131/26] IPv6=[] ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" HandleID="k8s-pod-network.f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:27.003938 containerd[1476]: 2025-02-13 20:11:26.929 [INFO][4150] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nqgc" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2971b30d-28b2-4542-90ae-cd3031359da7", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-9nqgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95405cddb01", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:27.003938 containerd[1476]: 2025-02-13 20:11:26.932 [INFO][4150] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.131/32] ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nqgc" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:27.003938 containerd[1476]: 2025-02-13 20:11:26.933 [INFO][4150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95405cddb01 ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nqgc" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:27.003938 containerd[1476]: 2025-02-13 20:11:26.942 [INFO][4150] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nqgc" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:27.003938 containerd[1476]: 2025-02-13 20:11:26.943 [INFO][4150] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nqgc" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2971b30d-28b2-4542-90ae-cd3031359da7", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0", Pod:"coredns-7db6d8ff4d-9nqgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95405cddb01", MAC:"52:52:70:bc:38:d9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:27.003938 containerd[1476]: 2025-02-13 20:11:26.980 [INFO][4150] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9nqgc" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:27.031419 systemd[1]: Started cri-containerd-1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91.scope - libcontainer container 1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91. Feb 13 20:11:27.067485 containerd[1476]: time="2025-02-13T20:11:27.067320740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:11:27.067485 containerd[1476]: time="2025-02-13T20:11:27.067399741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:11:27.067485 containerd[1476]: time="2025-02-13T20:11:27.067428244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:27.068847 containerd[1476]: time="2025-02-13T20:11:27.068368813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:27.103299 systemd[1]: Started cri-containerd-f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0.scope - libcontainer container f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0. Feb 13 20:11:27.146708 containerd[1476]: time="2025-02-13T20:11:27.146627923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fb4d47d4-ffz4k,Uid:d9574409-d923-4274-a785-828313eee44c,Namespace:calico-system,Attempt:1,} returns sandbox id \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\"" Feb 13 20:11:27.209826 containerd[1476]: time="2025-02-13T20:11:27.209188010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9nqgc,Uid:2971b30d-28b2-4542-90ae-cd3031359da7,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0\"" Feb 13 20:11:27.210966 systemd-networkd[1385]: caliad77f1fd1fe: Gained IPv6LL Feb 13 20:11:27.216750 containerd[1476]: time="2025-02-13T20:11:27.216699991Z" level=info msg="CreateContainer within sandbox \"f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:11:27.248521 containerd[1476]: time="2025-02-13T20:11:27.248449778Z" level=info msg="CreateContainer within sandbox \"f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a2068ed493828c96acb11c6078521980ec661eff48ddfb0e84a2e25546952566\"" Feb 13 20:11:27.251119 containerd[1476]: time="2025-02-13T20:11:27.251054737Z" level=info msg="StartContainer for \"a2068ed493828c96acb11c6078521980ec661eff48ddfb0e84a2e25546952566\"" Feb 13 20:11:27.312576 systemd[1]: Started cri-containerd-a2068ed493828c96acb11c6078521980ec661eff48ddfb0e84a2e25546952566.scope - libcontainer container a2068ed493828c96acb11c6078521980ec661eff48ddfb0e84a2e25546952566. 
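Once the endpoint is written to the datastore, the CNI ADD is done and the runtime takes over: the repeated "loading plugin" lines are the runc shim initializing its ttrpc services for each new sandbox, systemd tracks each shim as a transient cri-containerd-<id>.scope unit, RunPodSandbox returns the 64-hex sandbox id, and kubelet then drives CreateContainer/StartContainer inside that sandbox. A hypothetical helper (an assumption, not from the log) that indexes those sandbox ids back to pod names, assuming one journal entry per input line:

    import re

    SANDBOX = re.compile(
        r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),'
        r'.*returns sandbox id \\"(?P<sid>[0-9a-f]{64})\\"'
    )

    def sandbox_index(journal_lines):
        # e.g. {"f6ca37f0...f5c0": "coredns-7db6d8ff4d-9nqgc", ...}
        return {m.group("sid"): m.group("pod")
                for m in map(SANDBOX.search, journal_lines) if m}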
Feb 13 20:11:27.369727 containerd[1476]: time="2025-02-13T20:11:27.369525089Z" level=info msg="StartContainer for \"a2068ed493828c96acb11c6078521980ec661eff48ddfb0e84a2e25546952566\" returns successfully" Feb 13 20:11:27.418418 containerd[1476]: time="2025-02-13T20:11:27.416609112Z" level=info msg="StopPodSandbox for \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\"" Feb 13 20:11:27.436247 containerd[1476]: time="2025-02-13T20:11:27.436031398Z" level=info msg="StopPodSandbox for \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\"" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.613 [INFO][4356] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.613 [INFO][4356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" iface="eth0" netns="/var/run/netns/cni-4c451704-ad23-a580-bafb-1afc3bc8ee2f" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.613 [INFO][4356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" iface="eth0" netns="/var/run/netns/cni-4c451704-ad23-a580-bafb-1afc3bc8ee2f" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.614 [INFO][4356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" iface="eth0" netns="/var/run/netns/cni-4c451704-ad23-a580-bafb-1afc3bc8ee2f" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.614 [INFO][4356] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.614 [INFO][4356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.690 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.691 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.691 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.702 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.703 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.707 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:27.717377 containerd[1476]: 2025-02-13 20:11:27.711 [INFO][4356] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:27.717377 containerd[1476]: time="2025-02-13T20:11:27.715030167Z" level=info msg="TearDown network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\" successfully" Feb 13 20:11:27.717377 containerd[1476]: time="2025-02-13T20:11:27.715119800Z" level=info msg="StopPodSandbox for \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\" returns successfully" Feb 13 20:11:27.730222 containerd[1476]: time="2025-02-13T20:11:27.729195794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657bdbf897-8br7r,Uid:30ffed34-3dac-4f10-aa92-96879d7c81be,Namespace:calico-apiserver,Attempt:1,}" Feb 13 20:11:27.722962 systemd[1]: run-netns-cni\x2d4c451704\x2dad23\x2da580\x2dbafb\x2d1afc3bc8ee2f.mount: Deactivated successfully. Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.622 [INFO][4352] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.623 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" iface="eth0" netns="/var/run/netns/cni-23672b64-6c38-665a-bd19-e602dab56a1b" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.624 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" iface="eth0" netns="/var/run/netns/cni-23672b64-6c38-665a-bd19-e602dab56a1b" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.626 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" iface="eth0" netns="/var/run/netns/cni-23672b64-6c38-665a-bd19-e602dab56a1b" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.626 [INFO][4352] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.626 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.702 [INFO][4371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.702 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.707 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.736 [WARNING][4371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.736 [INFO][4371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.738 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:27.742500 containerd[1476]: 2025-02-13 20:11:27.740 [INFO][4352] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:27.747738 containerd[1476]: time="2025-02-13T20:11:27.742703992Z" level=info msg="TearDown network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\" successfully" Feb 13 20:11:27.747738 containerd[1476]: time="2025-02-13T20:11:27.742739509Z" level=info msg="StopPodSandbox for \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\" returns successfully" Feb 13 20:11:27.747738 containerd[1476]: time="2025-02-13T20:11:27.745404078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnc68,Uid:00ae0c73-92db-4a9c-a76b-7c749f976739,Namespace:calico-system,Attempt:1,}" Feb 13 20:11:27.755523 systemd[1]: run-netns-cni\x2d23672b64\x2d6c38\x2d665a\x2dbd19\x2de602dab56a1b.mount: Deactivated successfully. 
Feb 13 20:11:27.777755 kubelet[2611]: I0213 20:11:27.777558 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9nqgc" podStartSLOduration=32.777530787 podStartE2EDuration="32.777530787s" podCreationTimestamp="2025-02-13 20:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:11:27.753395152 +0000 UTC m=+46.559078796" watchObservedRunningTime="2025-02-13 20:11:27.777530787 +0000 UTC m=+46.583214444" Feb 13 20:11:28.045856 systemd-networkd[1385]: cali95405cddb01: Gained IPv6LL Feb 13 20:11:28.148665 systemd-networkd[1385]: cali117899723a3: Link UP Feb 13 20:11:28.151165 systemd-networkd[1385]: cali117899723a3: Gained carrier Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:27.954 [INFO][4384] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0 calico-apiserver-657bdbf897- calico-apiserver 30ffed34-3dac-4f10-aa92-96879d7c81be 848 0 2025-02-13 20:11:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:657bdbf897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal calico-apiserver-657bdbf897-8br7r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali117899723a3 [] []}} ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-8br7r" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:27.955 [INFO][4384] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-8br7r" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.053 [INFO][4411] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" HandleID="k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.071 [INFO][4411] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" HandleID="k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef5e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", "pod":"calico-apiserver-657bdbf897-8br7r", "timestamp":"2025-02-13 20:11:28.053644202 +0000 UTC"}, 
Hostname:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.071 [INFO][4411] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.071 [INFO][4411] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.071 [INFO][4411] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.075 [INFO][4411] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.083 [INFO][4411] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.092 [INFO][4411] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.096 [INFO][4411] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.100 [INFO][4411] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.100 [INFO][4411] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.104 [INFO][4411] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.114 [INFO][4411] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.137 [INFO][4411] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.132/26] block=192.168.62.128/26 handle="k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.137 [INFO][4411] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.132/26] handle="k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.137 [INFO][4411] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:11:28.193371 containerd[1476]: 2025-02-13 20:11:28.137 [INFO][4411] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.132/26] IPv6=[] ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" HandleID="k8s-pod-network.b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:28.197191 containerd[1476]: 2025-02-13 20:11:28.141 [INFO][4384] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-8br7r" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0", GenerateName:"calico-apiserver-657bdbf897-", Namespace:"calico-apiserver", SelfLink:"", UID:"30ffed34-3dac-4f10-aa92-96879d7c81be", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657bdbf897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-657bdbf897-8br7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali117899723a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:28.197191 containerd[1476]: 2025-02-13 20:11:28.141 [INFO][4384] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.132/32] ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-8br7r" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:28.197191 containerd[1476]: 2025-02-13 20:11:28.141 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali117899723a3 ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-8br7r" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:28.197191 containerd[1476]: 2025-02-13 20:11:28.155 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Namespace="calico-apiserver" 
Pod="calico-apiserver-657bdbf897-8br7r" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:28.197191 containerd[1476]: 2025-02-13 20:11:28.156 [INFO][4384] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-8br7r" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0", GenerateName:"calico-apiserver-657bdbf897-", Namespace:"calico-apiserver", SelfLink:"", UID:"30ffed34-3dac-4f10-aa92-96879d7c81be", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657bdbf897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e", Pod:"calico-apiserver-657bdbf897-8br7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali117899723a3", MAC:"d2:6c:04:6b:58:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:28.197191 containerd[1476]: 2025-02-13 20:11:28.188 [INFO][4384] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e" Namespace="calico-apiserver" Pod="calico-apiserver-657bdbf897-8br7r" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:28.289344 containerd[1476]: time="2025-02-13T20:11:28.288852922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:11:28.289344 containerd[1476]: time="2025-02-13T20:11:28.288929346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:11:28.289344 containerd[1476]: time="2025-02-13T20:11:28.288953247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:28.289344 containerd[1476]: time="2025-02-13T20:11:28.289083179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:28.289303 systemd-networkd[1385]: califa7c565f42a: Link UP Feb 13 20:11:28.296037 systemd-networkd[1385]: califa7c565f42a: Gained carrier Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.012 [INFO][4395] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0 csi-node-driver- calico-system 00ae0c73-92db-4a9c-a76b-7c749f976739 849 0 2025-02-13 20:11:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal csi-node-driver-bnc68 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califa7c565f42a [] []}} ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Namespace="calico-system" Pod="csi-node-driver-bnc68" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.013 [INFO][4395] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Namespace="calico-system" Pod="csi-node-driver-bnc68" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.134 [INFO][4417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" HandleID="k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.188 [INFO][4417] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" HandleID="k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ac90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", "pod":"csi-node-driver-bnc68", "timestamp":"2025-02-13 20:11:28.133996152 +0000 UTC"}, Hostname:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.188 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.191 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.191 [INFO][4417] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.199 [INFO][4417] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.210 [INFO][4417] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.222 [INFO][4417] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.226 [INFO][4417] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.233 [INFO][4417] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.233 [INFO][4417] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.237 [INFO][4417] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9 Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.249 [INFO][4417] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.267 [INFO][4417] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.133/26] block=192.168.62.128/26 handle="k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.267 [INFO][4417] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.133/26] handle="k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.268 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
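Every address handed out in this section (.130 through .133) comes from the same node-affine block, 192.168.62.128/26, claimed one at a time under the host-wide lock, so this node can serve up to 64 pod IPs from the block before Calico has to claim another; the HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil) in the requests above suggests no per-block reservations are in play here. A quick capacity check with the standard library (a sketch):

    import ipaddress

    block = ipaddress.ip_network("192.168.62.128/26")
    print(block.num_addresses)    # 64 addresses per /26 block
    print(block[2], block[5])     # 192.168.62.130 192.168.62.133, the range seen here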
Feb 13 20:11:28.324580 containerd[1476]: 2025-02-13 20:11:28.270 [INFO][4417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.133/26] IPv6=[] ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" HandleID="k8s-pod-network.69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:28.325929 containerd[1476]: 2025-02-13 20:11:28.277 [INFO][4395] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Namespace="calico-system" Pod="csi-node-driver-bnc68" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00ae0c73-92db-4a9c-a76b-7c749f976739", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-bnc68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa7c565f42a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:28.325929 containerd[1476]: 2025-02-13 20:11:28.277 [INFO][4395] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.133/32] ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Namespace="calico-system" Pod="csi-node-driver-bnc68" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:28.325929 containerd[1476]: 2025-02-13 20:11:28.277 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa7c565f42a ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Namespace="calico-system" Pod="csi-node-driver-bnc68" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:28.325929 containerd[1476]: 2025-02-13 20:11:28.288 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Namespace="calico-system" Pod="csi-node-driver-bnc68" 
WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:28.325929 containerd[1476]: 2025-02-13 20:11:28.289 [INFO][4395] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Namespace="calico-system" Pod="csi-node-driver-bnc68" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00ae0c73-92db-4a9c-a76b-7c749f976739", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9", Pod:"csi-node-driver-bnc68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa7c565f42a", MAC:"c6:ee:36:40:2e:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:28.325929 containerd[1476]: 2025-02-13 20:11:28.318 [INFO][4395] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9" Namespace="calico-system" Pod="csi-node-driver-bnc68" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:28.367987 systemd[1]: Started cri-containerd-b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e.scope - libcontainer container b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e. Feb 13 20:11:28.439397 containerd[1476]: time="2025-02-13T20:11:28.437979871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:11:28.439397 containerd[1476]: time="2025-02-13T20:11:28.438144425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:11:28.439397 containerd[1476]: time="2025-02-13T20:11:28.438182591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:28.439397 containerd[1476]: time="2025-02-13T20:11:28.438337813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:28.494140 systemd[1]: Started cri-containerd-69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9.scope - libcontainer container 69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9. Feb 13 20:11:28.586539 containerd[1476]: time="2025-02-13T20:11:28.586373575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657bdbf897-8br7r,Uid:30ffed34-3dac-4f10-aa92-96879d7c81be,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e\"" Feb 13 20:11:28.620742 containerd[1476]: time="2025-02-13T20:11:28.620565795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnc68,Uid:00ae0c73-92db-4a9c-a76b-7c749f976739,Namespace:calico-system,Attempt:1,} returns sandbox id \"69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9\"" Feb 13 20:11:28.747050 systemd-networkd[1385]: califfb9fe776a6: Gained IPv6LL Feb 13 20:11:29.423720 containerd[1476]: time="2025-02-13T20:11:29.423633702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:29.425363 containerd[1476]: time="2025-02-13T20:11:29.425267262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 20:11:29.426793 containerd[1476]: time="2025-02-13T20:11:29.426702623Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:29.432118 containerd[1476]: time="2025-02-13T20:11:29.430944839Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:29.432606 containerd[1476]: time="2025-02-13T20:11:29.432560459Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.540357344s" Feb 13 20:11:29.432853 containerd[1476]: time="2025-02-13T20:11:29.432820379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:11:29.436596 containerd[1476]: time="2025-02-13T20:11:29.435695588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 20:11:29.438471 containerd[1476]: time="2025-02-13T20:11:29.438430160Z" level=info msg="CreateContainer within sandbox \"e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:11:29.461429 containerd[1476]: time="2025-02-13T20:11:29.461286413Z" level=info msg="CreateContainer within sandbox \"e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0ad5863990cdcea4c3b99062add8aafb012f188d15d39a1e71aa5be3b9ddc4e2\"" Feb 13 20:11:29.463128 containerd[1476]: time="2025-02-13T20:11:29.462066979Z" level=info 
msg="StartContainer for \"0ad5863990cdcea4c3b99062add8aafb012f188d15d39a1e71aa5be3b9ddc4e2\"" Feb 13 20:11:29.523330 systemd[1]: Started cri-containerd-0ad5863990cdcea4c3b99062add8aafb012f188d15d39a1e71aa5be3b9ddc4e2.scope - libcontainer container 0ad5863990cdcea4c3b99062add8aafb012f188d15d39a1e71aa5be3b9ddc4e2. Feb 13 20:11:29.591525 containerd[1476]: time="2025-02-13T20:11:29.591348189Z" level=info msg="StartContainer for \"0ad5863990cdcea4c3b99062add8aafb012f188d15d39a1e71aa5be3b9ddc4e2\" returns successfully" Feb 13 20:11:29.706364 systemd-networkd[1385]: califa7c565f42a: Gained IPv6LL Feb 13 20:11:29.835195 systemd-networkd[1385]: cali117899723a3: Gained IPv6LL Feb 13 20:11:30.411291 containerd[1476]: time="2025-02-13T20:11:30.410292352Z" level=info msg="StopPodSandbox for \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\"" Feb 13 20:11:30.529720 kubelet[2611]: I0213 20:11:30.528750 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-657bdbf897-6tf2l" podStartSLOduration=24.985553818 podStartE2EDuration="28.528718924s" podCreationTimestamp="2025-02-13 20:11:02 +0000 UTC" firstStartedPulling="2025-02-13 20:11:25.891364429 +0000 UTC m=+44.697048059" lastFinishedPulling="2025-02-13 20:11:29.434529532 +0000 UTC m=+48.240213165" observedRunningTime="2025-02-13 20:11:29.773739769 +0000 UTC m=+48.579423463" watchObservedRunningTime="2025-02-13 20:11:30.528718924 +0000 UTC m=+49.334402568" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.528 [INFO][4600] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.528 [INFO][4600] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" iface="eth0" netns="/var/run/netns/cni-285306a8-402d-ebd2-080e-ea0957b0f10e" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.529 [INFO][4600] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" iface="eth0" netns="/var/run/netns/cni-285306a8-402d-ebd2-080e-ea0957b0f10e" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.529 [INFO][4600] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" iface="eth0" netns="/var/run/netns/cni-285306a8-402d-ebd2-080e-ea0957b0f10e" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.529 [INFO][4600] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.529 [INFO][4600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.573 [INFO][4606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.573 [INFO][4606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.574 [INFO][4606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.583 [WARNING][4606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.583 [INFO][4606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.587 [INFO][4606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:30.592122 containerd[1476]: 2025-02-13 20:11:30.589 [INFO][4600] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:30.597678 containerd[1476]: time="2025-02-13T20:11:30.592338522Z" level=info msg="TearDown network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\" successfully" Feb 13 20:11:30.597678 containerd[1476]: time="2025-02-13T20:11:30.592380716Z" level=info msg="StopPodSandbox for \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\" returns successfully" Feb 13 20:11:30.597678 containerd[1476]: time="2025-02-13T20:11:30.595713713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hv8vv,Uid:4d678112-f176-4f7e-ac90-2c076ea6a206,Namespace:kube-system,Attempt:1,}" Feb 13 20:11:30.604059 systemd[1]: run-netns-cni\x2d285306a8\x2d402d\x2debd2\x2d080e\x2dea0957b0f10e.mount: Deactivated successfully. 
Feb 13 20:11:30.764491 kubelet[2611]: I0213 20:11:30.759827 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:11:30.845515 systemd-networkd[1385]: calie25409806ef: Link UP Feb 13 20:11:30.849293 systemd-networkd[1385]: calie25409806ef: Gained carrier Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.699 [INFO][4612] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0 coredns-7db6d8ff4d- kube-system 4d678112-f176-4f7e-ac90-2c076ea6a206 880 0 2025-02-13 20:10:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal coredns-7db6d8ff4d-hv8vv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie25409806ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hv8vv" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.700 [INFO][4612] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hv8vv" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.755 [INFO][4623] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" HandleID="k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.773 [INFO][4623] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" HandleID="k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-hv8vv", "timestamp":"2025-02-13 20:11:30.755014718 +0000 UTC"}, Hostname:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.773 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.774 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.774 [INFO][4623] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal' Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.777 [INFO][4623] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.784 [INFO][4623] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.793 [INFO][4623] ipam/ipam.go 489: Trying affinity for 192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.805 [INFO][4623] ipam/ipam.go 155: Attempting to load block cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.810 [INFO][4623] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.62.128/26 host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.810 [INFO][4623] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.62.128/26 handle="k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.813 [INFO][4623] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.822 [INFO][4623] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.62.128/26 handle="k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.834 [INFO][4623] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.62.134/26] block=192.168.62.128/26 handle="k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.836 [INFO][4623] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.62.134/26] handle="k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" host="ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal" Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.836 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:11:30.881119 containerd[1476]: 2025-02-13 20:11:30.836 [INFO][4623] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.62.134/26] IPv6=[] ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" HandleID="k8s-pod-network.0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.884464 containerd[1476]: 2025-02-13 20:11:30.840 [INFO][4612] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hv8vv" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d678112-f176-4f7e-ac90-2c076ea6a206", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-hv8vv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie25409806ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:30.884464 containerd[1476]: 2025-02-13 20:11:30.840 [INFO][4612] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.62.134/32] ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hv8vv" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.884464 containerd[1476]: 2025-02-13 20:11:30.840 [INFO][4612] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie25409806ef ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hv8vv" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.884464 containerd[1476]: 2025-02-13 20:11:30.848 [INFO][4612] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hv8vv" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.884464 containerd[1476]: 2025-02-13 20:11:30.849 [INFO][4612] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hv8vv" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d678112-f176-4f7e-ac90-2c076ea6a206", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a", Pod:"coredns-7db6d8ff4d-hv8vv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie25409806ef", MAC:"96:98:8c:3f:3e:09", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:30.884464 containerd[1476]: 2025-02-13 20:11:30.868 [INFO][4612] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hv8vv" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:30.962148 containerd[1476]: time="2025-02-13T20:11:30.962000735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:11:30.962952 containerd[1476]: time="2025-02-13T20:11:30.962705067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:11:30.964145 containerd[1476]: time="2025-02-13T20:11:30.962879371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:30.964575 containerd[1476]: time="2025-02-13T20:11:30.964425570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:11:31.044395 systemd[1]: Started cri-containerd-0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a.scope - libcontainer container 0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a. Feb 13 20:11:31.144233 containerd[1476]: time="2025-02-13T20:11:31.144181232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hv8vv,Uid:4d678112-f176-4f7e-ac90-2c076ea6a206,Namespace:kube-system,Attempt:1,} returns sandbox id \"0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a\"" Feb 13 20:11:31.152072 containerd[1476]: time="2025-02-13T20:11:31.152025029Z" level=info msg="CreateContainer within sandbox \"0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:11:31.193121 containerd[1476]: time="2025-02-13T20:11:31.192386356Z" level=info msg="CreateContainer within sandbox \"0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"593b799017f8a5c91b7cbc1454fe7ca90e0e70461b9327f38e7127b9b3c1d947\"" Feb 13 20:11:31.193708 containerd[1476]: time="2025-02-13T20:11:31.193672930Z" level=info msg="StartContainer for \"593b799017f8a5c91b7cbc1454fe7ca90e0e70461b9327f38e7127b9b3c1d947\"" Feb 13 20:11:31.272328 systemd[1]: Started cri-containerd-593b799017f8a5c91b7cbc1454fe7ca90e0e70461b9327f38e7127b9b3c1d947.scope - libcontainer container 593b799017f8a5c91b7cbc1454fe7ca90e0e70461b9327f38e7127b9b3c1d947. 
Feb 13 20:11:31.374973 containerd[1476]: time="2025-02-13T20:11:31.372881827Z" level=info msg="StartContainer for \"593b799017f8a5c91b7cbc1454fe7ca90e0e70461b9327f38e7127b9b3c1d947\" returns successfully" Feb 13 20:11:31.826278 kubelet[2611]: I0213 20:11:31.825944 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hv8vv" podStartSLOduration=36.825916521 podStartE2EDuration="36.825916521s" podCreationTimestamp="2025-02-13 20:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:11:31.799984 +0000 UTC m=+50.605667647" watchObservedRunningTime="2025-02-13 20:11:31.825916521 +0000 UTC m=+50.631600166" Feb 13 20:11:32.463801 containerd[1476]: time="2025-02-13T20:11:32.463732310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:32.465683 containerd[1476]: time="2025-02-13T20:11:32.465607033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 20:11:32.467275 containerd[1476]: time="2025-02-13T20:11:32.467199417Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:32.471238 containerd[1476]: time="2025-02-13T20:11:32.471039123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:32.473102 containerd[1476]: time="2025-02-13T20:11:32.472268970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.036520126s" Feb 13 20:11:32.473102 containerd[1476]: time="2025-02-13T20:11:32.472320147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 20:11:32.475791 containerd[1476]: time="2025-02-13T20:11:32.475755366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 20:11:32.498707 containerd[1476]: time="2025-02-13T20:11:32.498634867Z" level=info msg="CreateContainer within sandbox \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 20:11:32.535621 containerd[1476]: time="2025-02-13T20:11:32.535555051Z" level=info msg="CreateContainer within sandbox \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\"" Feb 13 20:11:32.546230 containerd[1476]: time="2025-02-13T20:11:32.545335828Z" level=info msg="StartContainer for \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\"" Feb 13 20:11:32.602450 systemd[1]: Started 
cri-containerd-2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489.scope - libcontainer container 2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489. Feb 13 20:11:32.650915 systemd-networkd[1385]: calie25409806ef: Gained IPv6LL Feb 13 20:11:32.725187 containerd[1476]: time="2025-02-13T20:11:32.724416113Z" level=info msg="StartContainer for \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\" returns successfully" Feb 13 20:11:32.829857 kubelet[2611]: I0213 20:11:32.829688 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84fb4d47d4-ffz4k" podStartSLOduration=25.506290274 podStartE2EDuration="30.829656289s" podCreationTimestamp="2025-02-13 20:11:02 +0000 UTC" firstStartedPulling="2025-02-13 20:11:27.150130843 +0000 UTC m=+45.955814468" lastFinishedPulling="2025-02-13 20:11:32.473496845 +0000 UTC m=+51.279180483" observedRunningTime="2025-02-13 20:11:32.824375086 +0000 UTC m=+51.630058730" watchObservedRunningTime="2025-02-13 20:11:32.829656289 +0000 UTC m=+51.635339935" Feb 13 20:11:32.839149 containerd[1476]: time="2025-02-13T20:11:32.837038671Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:32.842005 containerd[1476]: time="2025-02-13T20:11:32.841852589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 20:11:32.857015 containerd[1476]: time="2025-02-13T20:11:32.856948618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 380.934284ms" Feb 13 20:11:32.857015 containerd[1476]: time="2025-02-13T20:11:32.857014846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 20:11:32.859526 containerd[1476]: time="2025-02-13T20:11:32.859483761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:11:32.864238 containerd[1476]: time="2025-02-13T20:11:32.863225707Z" level=info msg="CreateContainer within sandbox \"b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 20:11:32.892481 containerd[1476]: time="2025-02-13T20:11:32.892418915Z" level=info msg="CreateContainer within sandbox \"b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3f162d6e41755e2167023fa2b17bb9db254c3a8ef58b9423e00bf36ff90e660\"" Feb 13 20:11:32.898127 containerd[1476]: time="2025-02-13T20:11:32.894479415Z" level=info msg="StartContainer for \"d3f162d6e41755e2167023fa2b17bb9db254c3a8ef58b9423e00bf36ff90e660\"" Feb 13 20:11:33.011398 systemd[1]: Started cri-containerd-d3f162d6e41755e2167023fa2b17bb9db254c3a8ef58b9423e00bf36ff90e660.scope - libcontainer container d3f162d6e41755e2167023fa2b17bb9db254c3a8ef58b9423e00bf36ff90e660. 
Feb 13 20:11:33.144707 containerd[1476]: time="2025-02-13T20:11:33.144035814Z" level=info msg="StartContainer for \"d3f162d6e41755e2167023fa2b17bb9db254c3a8ef58b9423e00bf36ff90e660\" returns successfully" Feb 13 20:11:34.392514 containerd[1476]: time="2025-02-13T20:11:34.392446805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:34.395335 containerd[1476]: time="2025-02-13T20:11:34.395251557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:11:34.397436 containerd[1476]: time="2025-02-13T20:11:34.396492487Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:34.406954 containerd[1476]: time="2025-02-13T20:11:34.406796030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:34.408529 containerd[1476]: time="2025-02-13T20:11:34.408442694Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.548908916s" Feb 13 20:11:34.408529 containerd[1476]: time="2025-02-13T20:11:34.408510706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:11:34.422712 containerd[1476]: time="2025-02-13T20:11:34.421565487Z" level=info msg="CreateContainer within sandbox \"69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:11:34.455203 containerd[1476]: time="2025-02-13T20:11:34.454655415Z" level=info msg="CreateContainer within sandbox \"69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a7c644769894ec26d80de5989e33b88db266e7b6b6220b66a16c1c5e7ecf490f\"" Feb 13 20:11:34.456337 containerd[1476]: time="2025-02-13T20:11:34.456287426Z" level=info msg="StartContainer for \"a7c644769894ec26d80de5989e33b88db266e7b6b6220b66a16c1c5e7ecf490f\"" Feb 13 20:11:34.562388 systemd[1]: Started cri-containerd-a7c644769894ec26d80de5989e33b88db266e7b6b6220b66a16c1c5e7ecf490f.scope - libcontainer container a7c644769894ec26d80de5989e33b88db266e7b6b6220b66a16c1c5e7ecf490f. 
Feb 13 20:11:34.633154 containerd[1476]: time="2025-02-13T20:11:34.632869580Z" level=info msg="StartContainer for \"a7c644769894ec26d80de5989e33b88db266e7b6b6220b66a16c1c5e7ecf490f\" returns successfully" Feb 13 20:11:34.637154 containerd[1476]: time="2025-02-13T20:11:34.636782065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:11:34.792810 kubelet[2611]: I0213 20:11:34.791785 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:11:35.057797 ntpd[1436]: Listen normally on 7 vxlan.calico 192.168.62.128:123 Feb 13 20:11:35.057928 ntpd[1436]: Listen normally on 8 vxlan.calico [fe80::64f2:44ff:fe37:af8a%4]:123 Feb 13 20:11:35.058299 ntpd[1436]: Listen normally on 9 caliad77f1fd1fe [fe80::ecee:eeff:feee:eeee%7]:123 Feb 13 20:11:35.058395 ntpd[1436]: Listen normally on 10 califfb9fe776a6 [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 20:11:35.058464 ntpd[1436]: Listen normally on 11 cali95405cddb01 [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 20:11:35.058527 ntpd[1436]: Listen normally on 12 cali117899723a3 [fe80::ecee:eeff:feee:eeee%10]:123 Feb 13 20:11:35.058592 ntpd[1436]: Listen normally on 13 califa7c565f42a [fe80::ecee:eeff:feee:eeee%11]:123 Feb 13 20:11:35.058664 ntpd[1436]: Listen normally on 14 calie25409806ef [fe80::ecee:eeff:feee:eeee%12]:123 Feb 13 20:11:36.081926 containerd[1476]: time="2025-02-13T20:11:36.081786395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:36.085542 containerd[1476]: time="2025-02-13T20:11:36.085418975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:11:36.086188 containerd[1476]: time="2025-02-13T20:11:36.086017867Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:36.093872 containerd[1476]: time="2025-02-13T20:11:36.093750098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:11:36.097175 containerd[1476]: time="2025-02-13T20:11:36.097010659Z" level=info msg="Pulled image
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.460162468s" Feb 13 20:11:36.097872 containerd[1476]: time="2025-02-13T20:11:36.097828595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:11:36.105562 containerd[1476]: time="2025-02-13T20:11:36.105466142Z" level=info msg="CreateContainer within sandbox \"69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:11:36.132718 containerd[1476]: time="2025-02-13T20:11:36.132656496Z" level=info msg="CreateContainer within sandbox \"69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1beb45c79b77fc71e9d815e34b2f05d1697901162c90b13d7bd3eeca3670fc43\"" Feb 13 20:11:36.134265 containerd[1476]: time="2025-02-13T20:11:36.133591839Z" level=info msg="StartContainer for \"1beb45c79b77fc71e9d815e34b2f05d1697901162c90b13d7bd3eeca3670fc43\"" Feb 13 20:11:36.223517 systemd[1]: Started cri-containerd-1beb45c79b77fc71e9d815e34b2f05d1697901162c90b13d7bd3eeca3670fc43.scope - libcontainer container 1beb45c79b77fc71e9d815e34b2f05d1697901162c90b13d7bd3eeca3670fc43. Feb 13 20:11:36.288837 containerd[1476]: time="2025-02-13T20:11:36.288637750Z" level=info msg="StartContainer for \"1beb45c79b77fc71e9d815e34b2f05d1697901162c90b13d7bd3eeca3670fc43\" returns successfully" Feb 13 20:11:36.563690 kubelet[2611]: I0213 20:11:36.563618 2611 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:11:36.563690 kubelet[2611]: I0213 20:11:36.563669 2611 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:11:36.816294 kubelet[2611]: I0213 20:11:36.816064 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-657bdbf897-8br7r" podStartSLOduration=30.546801975 podStartE2EDuration="34.81603646s" podCreationTimestamp="2025-02-13 20:11:02 +0000 UTC" firstStartedPulling="2025-02-13 20:11:28.589947286 +0000 UTC m=+47.395630918" lastFinishedPulling="2025-02-13 20:11:32.85918177 +0000 UTC m=+51.664865403" observedRunningTime="2025-02-13 20:11:33.804013734 +0000 UTC m=+52.609697382" watchObservedRunningTime="2025-02-13 20:11:36.81603646 +0000 UTC m=+55.621720104" Feb 13 20:11:36.817064 kubelet[2611]: I0213 20:11:36.816519 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bnc68" podStartSLOduration=27.339931326 podStartE2EDuration="34.81650389s" podCreationTimestamp="2025-02-13 20:11:02 +0000 UTC" firstStartedPulling="2025-02-13 20:11:28.62388062 +0000 UTC m=+47.429564253" lastFinishedPulling="2025-02-13 20:11:36.100453195 +0000 UTC m=+54.906136817" observedRunningTime="2025-02-13 20:11:36.81576597 +0000 UTC m=+55.621449624" watchObservedRunningTime="2025-02-13 20:11:36.81650389 +0000 UTC 
m=+55.622187535" Feb 13 20:11:41.387166 containerd[1476]: time="2025-02-13T20:11:41.386586914Z" level=info msg="StopPodSandbox for \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\"" Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.448 [WARNING][4932] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0", GenerateName:"calico-apiserver-657bdbf897-", Namespace:"calico-apiserver", SelfLink:"", UID:"790c11b2-49c7-42d3-9545-23dd53e0fde9", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657bdbf897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62", Pod:"calico-apiserver-657bdbf897-6tf2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad77f1fd1fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.449 [INFO][4932] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.449 [INFO][4932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" iface="eth0" netns="" Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.449 [INFO][4932] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.449 [INFO][4932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.482 [INFO][4941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.482 [INFO][4941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.482 [INFO][4941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.489 [WARNING][4941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.489 [INFO][4941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.492 [INFO][4941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:41.496782 containerd[1476]: 2025-02-13 20:11:41.494 [INFO][4932] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:41.497793 containerd[1476]: time="2025-02-13T20:11:41.496859631Z" level=info msg="TearDown network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\" successfully" Feb 13 20:11:41.497793 containerd[1476]: time="2025-02-13T20:11:41.496897997Z" level=info msg="StopPodSandbox for \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\" returns successfully" Feb 13 20:11:41.498048 containerd[1476]: time="2025-02-13T20:11:41.498000872Z" level=info msg="RemovePodSandbox for \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\"" Feb 13 20:11:41.498371 containerd[1476]: time="2025-02-13T20:11:41.498060690Z" level=info msg="Forcibly stopping sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\"" Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.555 [WARNING][4959] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0", GenerateName:"calico-apiserver-657bdbf897-", Namespace:"calico-apiserver", SelfLink:"", UID:"790c11b2-49c7-42d3-9545-23dd53e0fde9", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657bdbf897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"e5291a757c506b3a7f871435d5d937998f6cad2679ba11043fd21a06d5e00b62", Pod:"calico-apiserver-657bdbf897-6tf2l", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad77f1fd1fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.556 [INFO][4959] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.556 [INFO][4959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" iface="eth0" netns="" Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.556 [INFO][4959] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.556 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.587 [INFO][4965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.587 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.587 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.594 [WARNING][4965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.594 [INFO][4965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" HandleID="k8s-pod-network.67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--6tf2l-eth0" Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.596 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:41.602003 containerd[1476]: 2025-02-13 20:11:41.599 [INFO][4959] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9" Feb 13 20:11:41.603109 containerd[1476]: time="2025-02-13T20:11:41.602167888Z" level=info msg="TearDown network for sandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\" successfully" Feb 13 20:11:41.909756 containerd[1476]: time="2025-02-13T20:11:41.909233733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:41.909756 containerd[1476]: time="2025-02-13T20:11:41.909366018Z" level=info msg="RemovePodSandbox \"67192209148f2e96b30f128fbdc566fdf199fe6cb26492b0914665eb39aa2ab9\" returns successfully" Feb 13 20:11:41.911946 containerd[1476]: time="2025-02-13T20:11:41.911410628Z" level=info msg="StopPodSandbox for \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\"" Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:41.973 [WARNING][4986] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2971b30d-28b2-4542-90ae-cd3031359da7", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0", Pod:"coredns-7db6d8ff4d-9nqgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95405cddb01", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:41.974 [INFO][4986] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:41.975 [INFO][4986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" iface="eth0" netns="" Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:41.976 [INFO][4986] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:41.976 [INFO][4986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:42.040 [INFO][4992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:42.040 [INFO][4992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:42.040 [INFO][4992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:42.066 [WARNING][4992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:42.066 [INFO][4992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:42.070 [INFO][4992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:42.077143 containerd[1476]: 2025-02-13 20:11:42.073 [INFO][4986] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:42.079654 containerd[1476]: time="2025-02-13T20:11:42.079355348Z" level=info msg="TearDown network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\" successfully" Feb 13 20:11:42.079654 containerd[1476]: time="2025-02-13T20:11:42.079408328Z" level=info msg="StopPodSandbox for \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\" returns successfully" Feb 13 20:11:42.081918 containerd[1476]: time="2025-02-13T20:11:42.081363215Z" level=info msg="RemovePodSandbox for \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\"" Feb 13 20:11:42.081918 containerd[1476]: time="2025-02-13T20:11:42.081412263Z" level=info msg="Forcibly stopping sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\"" Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.188 [WARNING][5010] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2971b30d-28b2-4542-90ae-cd3031359da7", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"f6ca37f02a419b47b33b24f0f53581ed645c888a8cbab3cdfeca4bd712f3f5c0", Pod:"coredns-7db6d8ff4d-9nqgc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95405cddb01", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.188 [INFO][5010] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.188 [INFO][5010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" iface="eth0" netns="" Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.188 [INFO][5010] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.189 [INFO][5010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.229 [INFO][5018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.230 [INFO][5018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.230 [INFO][5018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.240 [WARNING][5018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.240 [INFO][5018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" HandleID="k8s-pod-network.18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--9nqgc-eth0" Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.242 [INFO][5018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:42.246027 containerd[1476]: 2025-02-13 20:11:42.244 [INFO][5010] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9" Feb 13 20:11:42.246027 containerd[1476]: time="2025-02-13T20:11:42.245918090Z" level=info msg="TearDown network for sandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\" successfully" Feb 13 20:11:42.253932 containerd[1476]: time="2025-02-13T20:11:42.253485558Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:42.253932 containerd[1476]: time="2025-02-13T20:11:42.253677281Z" level=info msg="RemovePodSandbox \"18fc040e7cbbdac1510acf4bb1f235d3d55d4de3700c0f9ef722bf99951b56b9\" returns successfully" Feb 13 20:11:42.255673 containerd[1476]: time="2025-02-13T20:11:42.255160235Z" level=info msg="StopPodSandbox for \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\"" Feb 13 20:11:42.310583 systemd[1]: Started sshd@8-10.128.0.67:22-139.178.89.65:47922.service - OpenSSH per-connection server daemon (139.178.89.65:47922). Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.320 [WARNING][5038] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0", GenerateName:"calico-apiserver-657bdbf897-", Namespace:"calico-apiserver", SelfLink:"", UID:"30ffed34-3dac-4f10-aa92-96879d7c81be", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657bdbf897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e", Pod:"calico-apiserver-657bdbf897-8br7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali117899723a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.320 [INFO][5038] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.321 [INFO][5038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" iface="eth0" netns="" Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.321 [INFO][5038] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.321 [INFO][5038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.360 [INFO][5046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.361 [INFO][5046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.361 [INFO][5046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.370 [WARNING][5046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.370 [INFO][5046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.373 [INFO][5046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:42.376890 containerd[1476]: 2025-02-13 20:11:42.375 [INFO][5038] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:42.378615 containerd[1476]: time="2025-02-13T20:11:42.376892363Z" level=info msg="TearDown network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\" successfully" Feb 13 20:11:42.378615 containerd[1476]: time="2025-02-13T20:11:42.376931374Z" level=info msg="StopPodSandbox for \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\" returns successfully" Feb 13 20:11:42.378615 containerd[1476]: time="2025-02-13T20:11:42.377914226Z" level=info msg="RemovePodSandbox for \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\"" Feb 13 20:11:42.378615 containerd[1476]: time="2025-02-13T20:11:42.377956496Z" level=info msg="Forcibly stopping sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\"" Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.432 [WARNING][5065] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0", GenerateName:"calico-apiserver-657bdbf897-", Namespace:"calico-apiserver", SelfLink:"", UID:"30ffed34-3dac-4f10-aa92-96879d7c81be", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657bdbf897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"b3281740226cdfb6bb2b3948cb5ad942b768ec62c5c49950dc56f46486a8d58e", Pod:"calico-apiserver-657bdbf897-8br7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali117899723a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.433 [INFO][5065] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.433 [INFO][5065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" iface="eth0" netns="" Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.433 [INFO][5065] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.433 [INFO][5065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.468 [INFO][5071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.468 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.468 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.475 [WARNING][5071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.475 [INFO][5071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" HandleID="k8s-pod-network.c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--apiserver--657bdbf897--8br7r-eth0" Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.477 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:42.481306 containerd[1476]: 2025-02-13 20:11:42.479 [INFO][5065] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec" Feb 13 20:11:42.483034 containerd[1476]: time="2025-02-13T20:11:42.481347984Z" level=info msg="TearDown network for sandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\" successfully" Feb 13 20:11:42.487587 containerd[1476]: time="2025-02-13T20:11:42.487488788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:42.487862 containerd[1476]: time="2025-02-13T20:11:42.487609288Z" level=info msg="RemovePodSandbox \"c80953ae6360cdb5968062f8aab7d41ecfacf7c7263e0f20a4ee6c688e9d04ec\" returns successfully" Feb 13 20:11:42.488643 containerd[1476]: time="2025-02-13T20:11:42.488602887Z" level=info msg="StopPodSandbox for \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\"" Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.574 [WARNING][5089] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d678112-f176-4f7e-ac90-2c076ea6a206", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a", Pod:"coredns-7db6d8ff4d-hv8vv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie25409806ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.575 [INFO][5089] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.575 [INFO][5089] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" iface="eth0" netns="" Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.575 [INFO][5089] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.575 [INFO][5089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.606 [INFO][5095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.606 [INFO][5095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.607 [INFO][5095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.615 [WARNING][5095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.615 [INFO][5095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.618 [INFO][5095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:42.623324 containerd[1476]: 2025-02-13 20:11:42.620 [INFO][5089] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:42.627255 containerd[1476]: time="2025-02-13T20:11:42.623359234Z" level=info msg="TearDown network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\" successfully" Feb 13 20:11:42.627255 containerd[1476]: time="2025-02-13T20:11:42.623403682Z" level=info msg="StopPodSandbox for \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\" returns successfully" Feb 13 20:11:42.627255 containerd[1476]: time="2025-02-13T20:11:42.624457577Z" level=info msg="RemovePodSandbox for \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\"" Feb 13 20:11:42.627255 containerd[1476]: time="2025-02-13T20:11:42.624513220Z" level=info msg="Forcibly stopping sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\"" Feb 13 20:11:42.634699 sshd[5045]: Accepted publickey for core from 139.178.89.65 port 47922 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:11:42.637745 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:42.650031 systemd-logind[1446]: New session 8 of user core. Feb 13 20:11:42.657490 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.691 [WARNING][5113] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d678112-f176-4f7e-ac90-2c076ea6a206", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 10, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"0dfc8a21b340813ef5cd50646f4c28a647fba2fb7340edfbc497ba09e477521a", Pod:"coredns-7db6d8ff4d-hv8vv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie25409806ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.691 [INFO][5113] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.691 [INFO][5113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" iface="eth0" netns="" Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.691 [INFO][5113] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.691 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.721 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.721 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.721 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.731 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.731 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" HandleID="k8s-pod-network.a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--hv8vv-eth0" Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.733 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:42.737424 containerd[1476]: 2025-02-13 20:11:42.735 [INFO][5113] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec" Feb 13 20:11:42.737424 containerd[1476]: time="2025-02-13T20:11:42.737352797Z" level=info msg="TearDown network for sandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\" successfully" Feb 13 20:11:42.742789 containerd[1476]: time="2025-02-13T20:11:42.742727179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:42.742789 containerd[1476]: time="2025-02-13T20:11:42.742831527Z" level=info msg="RemovePodSandbox \"a8dff7c7fed6a6ec2b947df471b62a40ac1bb33b9cfcf98de0527026b29f37ec\" returns successfully" Feb 13 20:11:42.743641 containerd[1476]: time="2025-02-13T20:11:42.743595346Z" level=info msg="StopPodSandbox for \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\"" Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.793 [WARNING][5138] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0", GenerateName:"calico-kube-controllers-84fb4d47d4-", Namespace:"calico-system", SelfLink:"", UID:"d9574409-d923-4274-a785-828313eee44c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fb4d47d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91", Pod:"calico-kube-controllers-84fb4d47d4-ffz4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfb9fe776a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.794 [INFO][5138] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.794 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" iface="eth0" netns="" Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.794 [INFO][5138] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.794 [INFO][5138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.856 [INFO][5144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.857 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.857 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.874 [WARNING][5144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.874 [INFO][5144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.877 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:42.887149 containerd[1476]: 2025-02-13 20:11:42.881 [INFO][5138] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:42.887149 containerd[1476]: time="2025-02-13T20:11:42.885573957Z" level=info msg="TearDown network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\" successfully" Feb 13 20:11:42.887149 containerd[1476]: time="2025-02-13T20:11:42.885611840Z" level=info msg="StopPodSandbox for \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\" returns successfully" Feb 13 20:11:42.891954 containerd[1476]: time="2025-02-13T20:11:42.889998375Z" level=info msg="RemovePodSandbox for \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\"" Feb 13 20:11:42.891954 containerd[1476]: time="2025-02-13T20:11:42.890054592Z" level=info msg="Forcibly stopping sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\"" Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:42.972 [WARNING][5171] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0", GenerateName:"calico-kube-controllers-84fb4d47d4-", Namespace:"calico-system", SelfLink:"", UID:"d9574409-d923-4274-a785-828313eee44c", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fb4d47d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91", Pod:"calico-kube-controllers-84fb4d47d4-ffz4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfb9fe776a6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:42.973 [INFO][5171] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:42.973 [INFO][5171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" iface="eth0" netns="" Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:42.973 [INFO][5171] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:42.974 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:43.012 [INFO][5178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:43.012 [INFO][5178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:43.012 [INFO][5178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:43.021 [WARNING][5178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:43.021 [INFO][5178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" HandleID="k8s-pod-network.577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:43.023 [INFO][5178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:43.026721 containerd[1476]: 2025-02-13 20:11:43.024 [INFO][5171] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399" Feb 13 20:11:43.028419 containerd[1476]: time="2025-02-13T20:11:43.026748914Z" level=info msg="TearDown network for sandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\" successfully" Feb 13 20:11:43.033151 containerd[1476]: time="2025-02-13T20:11:43.033011504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:43.033151 containerd[1476]: time="2025-02-13T20:11:43.033151930Z" level=info msg="RemovePodSandbox \"577a0ad8448d0a1eef01224753e6f04c1a58e1a38965f68c3c7ee26d00d4a399\" returns successfully" Feb 13 20:11:43.034056 containerd[1476]: time="2025-02-13T20:11:43.034001630Z" level=info msg="StopPodSandbox for \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\"" Feb 13 20:11:43.034711 sshd[5045]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:43.042495 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:11:43.043706 systemd[1]: sshd@8-10.128.0.67:22-139.178.89.65:47922.service: Deactivated successfully. Feb 13 20:11:43.048738 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:11:43.054985 systemd-logind[1446]: Removed session 8. Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.096 [WARNING][5198] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00ae0c73-92db-4a9c-a76b-7c749f976739", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9", Pod:"csi-node-driver-bnc68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa7c565f42a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.097 [INFO][5198] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.097 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" iface="eth0" netns="" Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.097 [INFO][5198] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.097 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.128 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.128 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.128 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.136 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.136 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.138 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:43.143024 containerd[1476]: 2025-02-13 20:11:43.140 [INFO][5198] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:43.143024 containerd[1476]: time="2025-02-13T20:11:43.142960861Z" level=info msg="TearDown network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\" successfully" Feb 13 20:11:43.143024 containerd[1476]: time="2025-02-13T20:11:43.143003006Z" level=info msg="StopPodSandbox for \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\" returns successfully" Feb 13 20:11:43.144339 containerd[1476]: time="2025-02-13T20:11:43.144157452Z" level=info msg="RemovePodSandbox for \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\"" Feb 13 20:11:43.144472 containerd[1476]: time="2025-02-13T20:11:43.144351995Z" level=info msg="Forcibly stopping sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\"" Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.203 [WARNING][5223] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00ae0c73-92db-4a9c-a76b-7c749f976739", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 11, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal", ContainerID:"69f4eb289f28edbb7705d37b01b0f615b3c972b52b7fb2dbc9aa39b15331e6a9", Pod:"csi-node-driver-bnc68", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califa7c565f42a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.204 [INFO][5223] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.204 [INFO][5223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" iface="eth0" netns="" Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.204 [INFO][5223] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.204 [INFO][5223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.235 [INFO][5229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.235 [INFO][5229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.236 [INFO][5229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.243 [WARNING][5229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.244 [INFO][5229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" HandleID="k8s-pod-network.6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-csi--node--driver--bnc68-eth0" Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.245 [INFO][5229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:11:43.250059 containerd[1476]: 2025-02-13 20:11:43.247 [INFO][5223] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34" Feb 13 20:11:43.251767 containerd[1476]: time="2025-02-13T20:11:43.250154990Z" level=info msg="TearDown network for sandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\" successfully" Feb 13 20:11:43.255584 containerd[1476]: time="2025-02-13T20:11:43.255519869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:11:43.255791 containerd[1476]: time="2025-02-13T20:11:43.255606850Z" level=info msg="RemovePodSandbox \"6117e8482fe6fc4f0b96bc213a13cbad2563134cb4cfa9ac092ea0b21d3e8b34\" returns successfully" Feb 13 20:11:43.721310 kubelet[2611]: I0213 20:11:43.720877 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:11:48.095642 systemd[1]: Started sshd@9-10.128.0.67:22-139.178.89.65:37888.service - OpenSSH per-connection server daemon (139.178.89.65:37888). Feb 13 20:11:48.395332 sshd[5266]: Accepted publickey for core from 139.178.89.65 port 37888 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:11:48.397527 sshd[5266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:48.404202 systemd-logind[1446]: New session 9 of user core. Feb 13 20:11:48.411423 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:11:48.714510 sshd[5266]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:48.720464 systemd[1]: sshd@9-10.128.0.67:22-139.178.89.65:37888.service: Deactivated successfully. Feb 13 20:11:48.723884 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:11:48.726882 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:11:48.728802 systemd-logind[1446]: Removed session 9. Feb 13 20:11:53.770621 systemd[1]: Started sshd@10-10.128.0.67:22-139.178.89.65:37904.service - OpenSSH per-connection server daemon (139.178.89.65:37904). Feb 13 20:11:54.070598 sshd[5305]: Accepted publickey for core from 139.178.89.65 port 37904 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:11:54.072817 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:54.080844 systemd-logind[1446]: New session 10 of user core. Feb 13 20:11:54.086414 systemd[1]: Started session-10.scope - Session 10 of User core. 
Feb 13 20:11:54.369885 sshd[5305]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:54.376738 systemd[1]: sshd@10-10.128.0.67:22-139.178.89.65:37904.service: Deactivated successfully. Feb 13 20:11:54.380214 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:11:54.382217 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:11:54.384265 systemd-logind[1446]: Removed session 10. Feb 13 20:11:54.426932 systemd[1]: Started sshd@11-10.128.0.67:22-139.178.89.65:37908.service - OpenSSH per-connection server daemon (139.178.89.65:37908). Feb 13 20:11:54.721788 sshd[5319]: Accepted publickey for core from 139.178.89.65 port 37908 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:11:54.724201 sshd[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:54.730717 systemd-logind[1446]: New session 11 of user core. Feb 13 20:11:54.739414 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:11:55.058007 sshd[5319]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:55.064411 systemd[1]: sshd@11-10.128.0.67:22-139.178.89.65:37908.service: Deactivated successfully. Feb 13 20:11:55.070288 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:11:55.073333 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:11:55.076813 systemd-logind[1446]: Removed session 11. Feb 13 20:11:55.113742 systemd[1]: Started sshd@12-10.128.0.67:22-139.178.89.65:55238.service - OpenSSH per-connection server daemon (139.178.89.65:55238). Feb 13 20:11:55.413525 sshd[5330]: Accepted publickey for core from 139.178.89.65 port 55238 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:11:55.416547 sshd[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:11:55.424532 systemd-logind[1446]: New session 12 of user core. Feb 13 20:11:55.430632 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:11:55.705439 sshd[5330]: pam_unix(sshd:session): session closed for user core Feb 13 20:11:55.711153 systemd[1]: sshd@12-10.128.0.67:22-139.178.89.65:55238.service: Deactivated successfully. Feb 13 20:11:55.714878 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:11:55.717301 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:11:55.719331 systemd-logind[1446]: Removed session 12. Feb 13 20:11:58.610481 kubelet[2611]: I0213 20:11:58.609916 2611 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:12:00.765562 systemd[1]: Started sshd@13-10.128.0.67:22-139.178.89.65:55250.service - OpenSSH per-connection server daemon (139.178.89.65:55250). Feb 13 20:12:01.057412 sshd[5366]: Accepted publickey for core from 139.178.89.65 port 55250 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:01.059672 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:01.067066 systemd-logind[1446]: New session 13 of user core. Feb 13 20:12:01.073455 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:12:01.355947 sshd[5366]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:01.363974 systemd[1]: sshd@13-10.128.0.67:22-139.178.89.65:55250.service: Deactivated successfully. Feb 13 20:12:01.367056 systemd[1]: session-13.scope: Deactivated successfully. 
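[NOTE] The systemd mount-unit names in the entries further below encode filesystem paths: "/" becomes "-", and a literal "-" inside a path component is escaped as \x2d, which is why units like run-netns-cni\x2d1abec38a\x2d....mount and var-lib-kubelet-pods-d9574409\x2d....mount look the way they do (the full rules live in systemd-escape(1)). A minimal Go unescape for just that one sequence:

    package main

    import (
        "fmt"
        "strings"
    )

    // unescapeDash reverses only the \x2d escaping systemd applies to "-"
    // when deriving unit names from paths; it does not implement the full
    // escape grammar.
    func unescapeDash(unit string) string {
        return strings.ReplaceAll(unit, `\x2d`, "-")
    }

    func main() {
        fmt.Println(unescapeDash(`run-netns-cni\x2d1abec38a\x2db2bb\x2de229\x2d2ef8\x2de28929d5efd3.mount`))
        // -> run-netns-cni-1abec38a-b2bb-e229-2ef8-e28929d5efd3.mount
        //    (i.e. the mount for /run/netns/cni-1abec38a-b2bb-e229-2ef8-e28929d5efd3)
    }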
Feb 13 20:12:01.368730 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Feb 13 20:12:01.370644 systemd-logind[1446]: Removed session 13. Feb 13 20:12:02.741151 containerd[1476]: time="2025-02-13T20:12:02.740899780Z" level=info msg="StopContainer for \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\" with timeout 300 (s)" Feb 13 20:12:02.741818 containerd[1476]: time="2025-02-13T20:12:02.741787510Z" level=info msg="Stop container \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\" with signal terminated" Feb 13 20:12:03.024002 containerd[1476]: time="2025-02-13T20:12:03.023384017Z" level=info msg="StopContainer for \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\" with timeout 30 (s)" Feb 13 20:12:03.026003 containerd[1476]: time="2025-02-13T20:12:03.025240257Z" level=info msg="Stop container \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\" with signal terminated" Feb 13 20:12:03.048459 systemd[1]: cri-containerd-2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489.scope: Deactivated successfully. Feb 13 20:12:03.106635 containerd[1476]: time="2025-02-13T20:12:03.106257812Z" level=info msg="shim disconnected" id=2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489 namespace=k8s.io Feb 13 20:12:03.106635 containerd[1476]: time="2025-02-13T20:12:03.106426690Z" level=warning msg="cleaning up after shim disconnected" id=2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489 namespace=k8s.io Feb 13 20:12:03.108791 containerd[1476]: time="2025-02-13T20:12:03.106443874Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:03.115235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489-rootfs.mount: Deactivated successfully. Feb 13 20:12:03.219116 containerd[1476]: time="2025-02-13T20:12:03.219033700Z" level=info msg="StopContainer for \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\" returns successfully" Feb 13 20:12:03.220031 containerd[1476]: time="2025-02-13T20:12:03.220003307Z" level=info msg="StopPodSandbox for \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\"" Feb 13 20:12:03.220391 containerd[1476]: time="2025-02-13T20:12:03.220162792Z" level=info msg="Container to stop \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:12:03.232449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91-shm.mount: Deactivated successfully. Feb 13 20:12:03.240952 systemd[1]: cri-containerd-1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91.scope: Deactivated successfully. 
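[NOTE] The two StopContainer calls above carry different grace periods (300 s for 319952c7..., 30 s for 2d1adf14...) and begin with "signal terminated"; once the container exits, its cri-containerd scope deactivates and the shim disconnects. A generic SIGTERM-then-SIGKILL escalation in Go, using a plain process as a stand-in for a container (Unix-only, illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithTimeout sends SIGTERM ("signal terminated"), waits up to the
    // grace period, then escalates to SIGKILL, mirroring the
    // stop-with-timeout pattern in the StopContainer entries above.
    func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace period expired: SIGKILL
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "600") // stand-in for a container process
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println(stopWithTimeout(cmd, 2*time.Second)) // "signal: terminated"
    }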
Feb 13 20:12:03.283483 containerd[1476]: time="2025-02-13T20:12:03.283310666Z" level=info msg="shim disconnected" id=1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91 namespace=k8s.io Feb 13 20:12:03.283483 containerd[1476]: time="2025-02-13T20:12:03.283395988Z" level=warning msg="cleaning up after shim disconnected" id=1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91 namespace=k8s.io Feb 13 20:12:03.283483 containerd[1476]: time="2025-02-13T20:12:03.283429467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:03.290547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91-rootfs.mount: Deactivated successfully. Feb 13 20:12:03.388217 systemd-networkd[1385]: califfb9fe776a6: Link DOWN Feb 13 20:12:03.390585 systemd-networkd[1385]: califfb9fe776a6: Lost carrier Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.386 [INFO][5468] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.386 [INFO][5468] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" iface="eth0" netns="/var/run/netns/cni-1abec38a-b2bb-e229-2ef8-e28929d5efd3" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.386 [INFO][5468] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" iface="eth0" netns="/var/run/netns/cni-1abec38a-b2bb-e229-2ef8-e28929d5efd3" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.400 [INFO][5468] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" after=13.800041ms iface="eth0" netns="/var/run/netns/cni-1abec38a-b2bb-e229-2ef8-e28929d5efd3" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.400 [INFO][5468] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.400 [INFO][5468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.478 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.478 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.479 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
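[NOTE] The [5468] entries show the dataplane half of this final teardown: califfb9fe776a6 drops carrier, then the plugin enters the sandbox's network namespace (/var/run/netns/cni-1abec38a-...) and deletes the workload's device in 13.8 ms. A sketch of that enter-netns-and-delete step, assuming the vishvananda/netns and vishvananda/netlink packages rather than Calico's actual dataplane code:

    package main

    import (
        "fmt"
        "runtime"
        "time"

        "github.com/vishvananda/netlink"
        "github.com/vishvananda/netns"
    )

    // deleteDeviceInNetns sketches the step logged as "Entered netns,
    // deleting veth." / "Deleted device in netns"; error handling trimmed.
    func deleteDeviceInNetns(nsPath, ifaceName string) error {
        runtime.LockOSThread() // network namespaces are per OS thread
        defer runtime.UnlockOSThread()

        host, err := netns.Get() // remember the host namespace
        if err != nil {
            return err
        }
        defer host.Close()
        defer netns.Set(host) // always switch back when done

        target, err := netns.GetFromPath(nsPath) // e.g. /var/run/netns/cni-...
        if err != nil {
            return err
        }
        defer target.Close()

        if err := netns.Set(target); err != nil { // "Entered netns"
            return err
        }
        start := time.Now()
        link, err := netlink.LinkByName(ifaceName) // the pod-side "eth0"
        if err != nil {
            return err
        }
        if err := netlink.LinkDel(link); err != nil { // deleting one end removes the veth pair
            return err
        }
        fmt.Printf("Deleted device in netns after=%s\n", time.Since(start))
        return nil
    }

    func main() {
        _ = deleteDeviceInNetns("/var/run/netns/cni-example", "eth0")
    }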
Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.520 [INFO][5477] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.521 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.522 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:12:03.526216 containerd[1476]: 2025-02-13 20:12:03.524 [INFO][5468] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Feb 13 20:12:03.528291 containerd[1476]: time="2025-02-13T20:12:03.528244806Z" level=info msg="TearDown network for sandbox \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\" successfully" Feb 13 20:12:03.528291 containerd[1476]: time="2025-02-13T20:12:03.528290142Z" level=info msg="StopPodSandbox for \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\" returns successfully" Feb 13 20:12:03.533923 systemd[1]: run-netns-cni\x2d1abec38a\x2db2bb\x2de229\x2d2ef8\x2de28929d5efd3.mount: Deactivated successfully. Feb 13 20:12:03.645646 kubelet[2611]: I0213 20:12:03.645548 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtzb2\" (UniqueName: \"kubernetes.io/projected/d9574409-d923-4274-a785-828313eee44c-kube-api-access-gtzb2\") pod \"d9574409-d923-4274-a785-828313eee44c\" (UID: \"d9574409-d923-4274-a785-828313eee44c\") " Feb 13 20:12:03.645646 kubelet[2611]: I0213 20:12:03.645646 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9574409-d923-4274-a785-828313eee44c-tigera-ca-bundle\") pod \"d9574409-d923-4274-a785-828313eee44c\" (UID: \"d9574409-d923-4274-a785-828313eee44c\") " Feb 13 20:12:03.661551 kubelet[2611]: I0213 20:12:03.661434 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9574409-d923-4274-a785-828313eee44c-kube-api-access-gtzb2" (OuterVolumeSpecName: "kube-api-access-gtzb2") pod "d9574409-d923-4274-a785-828313eee44c" (UID: "d9574409-d923-4274-a785-828313eee44c"). InnerVolumeSpecName "kube-api-access-gtzb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:12:03.663464 kubelet[2611]: I0213 20:12:03.663388 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9574409-d923-4274-a785-828313eee44c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d9574409-d923-4274-a785-828313eee44c" (UID: "d9574409-d923-4274-a785-828313eee44c"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:12:03.664424 systemd[1]: var-lib-kubelet-pods-d9574409\x2dd923\x2d4274\x2da785\x2d828313eee44c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Feb 13 20:12:03.664686 systemd[1]: var-lib-kubelet-pods-d9574409\x2dd923\x2d4274\x2da785\x2d828313eee44c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgtzb2.mount: Deactivated successfully. Feb 13 20:12:03.746780 kubelet[2611]: I0213 20:12:03.746713 2611 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9574409-d923-4274-a785-828313eee44c-tigera-ca-bundle\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:03.746780 kubelet[2611]: I0213 20:12:03.746766 2611 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gtzb2\" (UniqueName: \"kubernetes.io/projected/d9574409-d923-4274-a785-828313eee44c-kube-api-access-gtzb2\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:03.921536 kubelet[2611]: I0213 20:12:03.921473 2611 scope.go:117] "RemoveContainer" containerID="2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489" Feb 13 20:12:03.925120 containerd[1476]: time="2025-02-13T20:12:03.924995050Z" level=info msg="RemoveContainer for \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\"" Feb 13 20:12:03.935909 containerd[1476]: time="2025-02-13T20:12:03.935834566Z" level=info msg="RemoveContainer for \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\" returns successfully" Feb 13 20:12:03.936693 kubelet[2611]: I0213 20:12:03.936605 2611 scope.go:117] "RemoveContainer" containerID="2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489" Feb 13 20:12:03.937162 containerd[1476]: time="2025-02-13T20:12:03.937110358Z" level=error msg="ContainerStatus for \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\": not found" Feb 13 20:12:03.938135 kubelet[2611]: E0213 20:12:03.937314 2611 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\": not found" containerID="2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489" Feb 13 20:12:03.938135 kubelet[2611]: I0213 20:12:03.937356 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489"} err="failed to get container status \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d1adf144f68c55733c295101089f2b62fc49c0b6b2b258169e3a9edede3f489\": not found" Feb 13 20:12:03.944031 systemd[1]: Removed slice kubepods-besteffort-podd9574409_d923_4274_a785_828313eee44c.slice - libcontainer container kubepods-besteffort-podd9574409_d923_4274_a785_828313eee44c.slice. 
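The Calico trace above ("About to acquire host-wide IPAM lock" → "Released address using handleID" → "Released host-wide IPAM lock") follows a lock-around-release discipline before the netns mount and pod volumes are cleaned up. A minimal, stdlib-only Go sketch of that pattern; the `IPAM` type and `ReleaseByHandle` name are hypothetical stand-ins, not Calico's actual API:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// IPAM is a toy address manager; Calico's real IPAM is datastore-backed,
// but the lock-around-release discipline in the trace is the same shape.
type IPAM struct {
	mu       sync.Mutex // stands in for the "host-wide IPAM lock"
	byHandle map[string][]net.IP
}

func NewIPAM() *IPAM {
	return &IPAM{byHandle: make(map[string][]net.IP)}
}

// Assign records an address under a handle ID
// (e.g. "k8s-pod-network.<containerID>" in the log).
func (i *IPAM) Assign(handle string, ip net.IP) {
	i.mu.Lock()
	defer i.mu.Unlock()
	i.byHandle[handle] = append(i.byHandle[handle], ip)
}

// ReleaseByHandle frees every address recorded under the handle. Releasing
// an unknown handle is a no-op, mirroring the "Asked to release address
// but it doesn't exist. Ignoring" warning seen later in this log.
func (i *IPAM) ReleaseByHandle(handle string) []net.IP {
	i.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer i.mu.Unlock() // "Released host-wide IPAM lock."
	ips := i.byHandle[handle]
	delete(i.byHandle, handle)
	return ips
}

func main() {
	ipam := NewIPAM()
	ipam.Assign("k8s-pod-network.1408bb12", net.ParseIP("192.168.12.34"))
	fmt.Println("released:", ipam.ReleaseByHandle("k8s-pod-network.1408bb12"))
	// Second release of the same handle returns nil: the operation is idempotent.
	fmt.Println("released again:", ipam.ReleaseByHandle("k8s-pod-network.1408bb12"))
}
```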
Feb 13 20:12:05.412926 kubelet[2611]: I0213 20:12:05.412851 2611 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9574409-d923-4274-a785-828313eee44c" path="/var/lib/kubelet/pods/d9574409-d923-4274-a785-828313eee44c/volumes" Feb 13 20:12:06.057868 ntpd[1436]: Deleting interface #10 califfb9fe776a6, fe80::ecee:eeff:feee:eeee%8#123, interface stats: received=0, sent=0, dropped=0, active_time=31 secs Feb 13 20:12:06.058443 ntpd[1436]: 13 Feb 20:12:06 ntpd[1436]: Deleting interface #10 califfb9fe776a6, fe80::ecee:eeff:feee:eeee%8#123, interface stats: received=0, sent=0, dropped=0, active_time=31 secs Feb 13 20:12:06.419578 systemd[1]: Started sshd@14-10.128.0.67:22-139.178.89.65:39154.service - OpenSSH per-connection server daemon (139.178.89.65:39154). Feb 13 20:12:06.731263 sshd[5539]: Accepted publickey for core from 139.178.89.65 port 39154 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:06.734290 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:06.742738 systemd-logind[1446]: New session 14 of user core. Feb 13 20:12:06.751489 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:12:07.079883 sshd[5539]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:07.090630 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:12:07.092145 systemd[1]: sshd@14-10.128.0.67:22-139.178.89.65:39154.service: Deactivated successfully. Feb 13 20:12:07.097796 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:12:07.099504 systemd-logind[1446]: Removed session 14. Feb 13 20:12:07.488109 systemd[1]: cri-containerd-319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43.scope: Deactivated successfully. Feb 13 20:12:07.528727 containerd[1476]: time="2025-02-13T20:12:07.528025198Z" level=info msg="shim disconnected" id=319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43 namespace=k8s.io Feb 13 20:12:07.528727 containerd[1476]: time="2025-02-13T20:12:07.528150319Z" level=warning msg="cleaning up after shim disconnected" id=319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43 namespace=k8s.io Feb 13 20:12:07.528727 containerd[1476]: time="2025-02-13T20:12:07.528167053Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:07.538703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43-rootfs.mount: Deactivated successfully. Feb 13 20:12:07.574576 containerd[1476]: time="2025-02-13T20:12:07.574496190Z" level=info msg="StopContainer for \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\" returns successfully" Feb 13 20:12:07.575469 containerd[1476]: time="2025-02-13T20:12:07.575427714Z" level=info msg="StopPodSandbox for \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\"" Feb 13 20:12:07.575633 containerd[1476]: time="2025-02-13T20:12:07.575551384Z" level=info msg="Container to stop \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:12:07.583747 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a-shm.mount: Deactivated successfully. Feb 13 20:12:07.592172 systemd[1]: cri-containerd-e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a.scope: Deactivated successfully. 
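The sandbox stop at 20:12:07 records the guard message `Container to stop ... must be in running or unknown state, current state "CONTAINER_EXITED"`: the container had already exited when the sandbox teardown tried to stop it, and the condition is logged at info level rather than aborting the teardown. A small Go sketch of what such a guard plausibly looks like; the state names are taken from the log, but the function is illustrative, not containerd's code:

```go
package main

import "fmt"

// ContainerState mirrors the CRI state strings that appear in the log.
type ContainerState string

const (
	StateCreated ContainerState = "CONTAINER_CREATED"
	StateRunning ContainerState = "CONTAINER_RUNNING"
	StateExited  ContainerState = "CONTAINER_EXITED"
	StateUnknown ContainerState = "CONTAINER_UNKNOWN"
)

// checkStoppable reproduces the guard implied by the log line: only a
// running (or unknown-state) container needs an explicit stop; anything
// else is reported and the caller moves on with teardown.
func checkStoppable(id string, state ContainerState) error {
	if state == StateRunning || state == StateUnknown {
		return nil
	}
	return fmt.Errorf("Container to stop %q must be in running or unknown state, current state %q", id, state)
}

func main() {
	if err := checkStoppable("319952c7b212", StateExited); err != nil {
		fmt.Println("info:", err) // logged, but teardown continues
	}
}
```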
Feb 13 20:12:07.629513 containerd[1476]: time="2025-02-13T20:12:07.629288096Z" level=info msg="shim disconnected" id=e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a namespace=k8s.io Feb 13 20:12:07.629513 containerd[1476]: time="2025-02-13T20:12:07.629392700Z" level=warning msg="cleaning up after shim disconnected" id=e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a namespace=k8s.io Feb 13 20:12:07.629513 containerd[1476]: time="2025-02-13T20:12:07.629414338Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:07.634567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a-rootfs.mount: Deactivated successfully. Feb 13 20:12:07.661434 containerd[1476]: time="2025-02-13T20:12:07.661321603Z" level=info msg="TearDown network for sandbox \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\" successfully" Feb 13 20:12:07.661434 containerd[1476]: time="2025-02-13T20:12:07.661372876Z" level=info msg="StopPodSandbox for \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\" returns successfully" Feb 13 20:12:07.776654 kubelet[2611]: I0213 20:12:07.776471 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znwr4\" (UniqueName: \"kubernetes.io/projected/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-kube-api-access-znwr4\") pod \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\" (UID: \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\") " Feb 13 20:12:07.776654 kubelet[2611]: I0213 20:12:07.776552 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-typha-certs\") pod \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\" (UID: \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\") " Feb 13 20:12:07.776654 kubelet[2611]: I0213 20:12:07.776592 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-tigera-ca-bundle\") pod \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\" (UID: \"6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e\") " Feb 13 20:12:07.784145 kubelet[2611]: I0213 20:12:07.783707 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-kube-api-access-znwr4" (OuterVolumeSpecName: "kube-api-access-znwr4") pod "6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e" (UID: "6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e"). InnerVolumeSpecName "kube-api-access-znwr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:12:07.784451 kubelet[2611]: I0213 20:12:07.784282 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e" (UID: "6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:12:07.791122 kubelet[2611]: I0213 20:12:07.789261 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e" (UID: "6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:12:07.790914 systemd[1]: var-lib-kubelet-pods-6d4bc3f3\x2ddc41\x2d4720\x2dbd5b\x2ddfa158e88e7e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 13 20:12:07.798214 systemd[1]: var-lib-kubelet-pods-6d4bc3f3\x2ddc41\x2d4720\x2dbd5b\x2ddfa158e88e7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dznwr4.mount: Deactivated successfully. Feb 13 20:12:07.798406 systemd[1]: var-lib-kubelet-pods-6d4bc3f3\x2ddc41\x2d4720\x2dbd5b\x2ddfa158e88e7e-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Feb 13 20:12:07.877201 kubelet[2611]: I0213 20:12:07.877136 2611 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-znwr4\" (UniqueName: \"kubernetes.io/projected/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-kube-api-access-znwr4\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:07.877201 kubelet[2611]: I0213 20:12:07.877193 2611 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-typha-certs\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:07.877201 kubelet[2611]: I0213 20:12:07.877211 2611 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e-tigera-ca-bundle\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:07.940700 kubelet[2611]: I0213 20:12:07.940545 2611 scope.go:117] "RemoveContainer" containerID="319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43" Feb 13 20:12:07.945412 containerd[1476]: time="2025-02-13T20:12:07.944931904Z" level=info msg="RemoveContainer for \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\"" Feb 13 20:12:07.950263 containerd[1476]: time="2025-02-13T20:12:07.950218542Z" level=info msg="RemoveContainer for \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\" returns successfully" Feb 13 20:12:07.950693 kubelet[2611]: I0213 20:12:07.950662 2611 scope.go:117] "RemoveContainer" containerID="319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43" Feb 13 20:12:07.951053 systemd[1]: Removed slice kubepods-besteffort-pod6d4bc3f3_dc41_4720_bd5b_dfa158e88e7e.slice - libcontainer container kubepods-besteffort-pod6d4bc3f3_dc41_4720_bd5b_dfa158e88e7e.slice. 
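The mount-unit names above (`...\x2d...`, `...\x7e...`) come from systemd's path escaping: `/` maps to `-`, and any byte outside `[a-zA-Z0-9:_.]` becomes `\xNN`, so a literal `-` is `\x2d` and `~` (as in `kubernetes.io~secret`) is `\x7e`. A simplified Go sketch of that escaping; the real `systemd-escape` also special-cases a leading dot and the root path, which this skips:

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath applies a simplified form of systemd's path escaping, which
// explains unit names like
// var-lib-kubelet-pods-6d4bc3f3\x2ddc41...-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c) // allowed verbatim
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapePath("/var/lib/kubelet/pods/6d4bc3f3-dc41/volumes/kubernetes.io~secret/typha-certs") + ".mount")
}
```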
Feb 13 20:12:07.952272 kubelet[2611]: E0213 20:12:07.951495 2611 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\": not found" containerID="319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43" Feb 13 20:12:07.952272 kubelet[2611]: I0213 20:12:07.951534 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43"} err="failed to get container status \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\": rpc error: code = NotFound desc = an error occurred when try to find container \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\": not found" Feb 13 20:12:07.952504 containerd[1476]: time="2025-02-13T20:12:07.951325337Z" level=error msg="ContainerStatus for \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"319952c7b2120dc4b10afcea15f327c635bd15e04808176fbef592262559af43\": not found" Feb 13 20:12:09.412222 kubelet[2611]: I0213 20:12:09.412155 2611 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e" path="/var/lib/kubelet/pods/6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e/volumes" Feb 13 20:12:12.134629 systemd[1]: Started sshd@15-10.128.0.67:22-139.178.89.65:39160.service - OpenSSH per-connection server daemon (139.178.89.65:39160). Feb 13 20:12:12.443170 sshd[5723]: Accepted publickey for core from 139.178.89.65 port 39160 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:12.446071 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:12.454170 systemd-logind[1446]: New session 15 of user core. Feb 13 20:12:12.457334 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:12:12.744775 sshd[5723]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:12.751957 systemd[1]: sshd@15-10.128.0.67:22-139.178.89.65:39160.service: Deactivated successfully. Feb 13 20:12:12.755604 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:12:12.757366 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Feb 13 20:12:12.759280 systemd-logind[1446]: Removed session 15. Feb 13 20:12:17.805552 systemd[1]: Started sshd@16-10.128.0.67:22-139.178.89.65:55064.service - OpenSSH per-connection server daemon (139.178.89.65:55064). Feb 13 20:12:18.109966 sshd[5869]: Accepted publickey for core from 139.178.89.65 port 55064 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:18.112014 sshd[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:18.120000 systemd-logind[1446]: New session 16 of user core. Feb 13 20:12:18.133480 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:12:18.229354 systemd[1]: run-containerd-runc-k8s.io-0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3-runc.JQCADw.mount: Deactivated successfully. Feb 13 20:12:18.422515 sshd[5869]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:18.428359 systemd[1]: sshd@16-10.128.0.67:22-139.178.89.65:55064.service: Deactivated successfully. Feb 13 20:12:18.432862 systemd[1]: session-16.scope: Deactivated successfully. 
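The RemoveContainer / ContainerStatus sequence above ends in a gRPC `NotFound`, which the kubelet records but does not treat as fatal: on a delete path, "not found" means the work is already done. A Go sketch of that idempotent handling using the real `google.golang.org/grpc/status` and `codes` packages (module dependency on grpc assumed); the `fetch` callback is a stand-in for the CRI ContainerStatus RPC:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteContainerStatus treats a NotFound status as success: the container
// was already removed, so there is nothing left to delete.
func deleteContainerStatus(id string, fetch func(string) error) error {
	if err := fetch(id); err != nil {
		if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
			fmt.Printf("container %q already removed, nothing to do\n", id)
			return nil // idempotent delete: NotFound is not an error here
		}
		return err
	}
	return errors.New("container still present")
}

func main() {
	// Simulated runtime response, echoing the error text recorded in the log.
	notFound := func(id string) error {
		return status.Errorf(codes.NotFound,
			"an error occurred when try to find container %q: not found", id)
	}
	_ = deleteContainerStatus("319952c7b212", notFound)
}
```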
Feb 13 20:12:18.435864 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:12:18.438056 systemd-logind[1446]: Removed session 16. Feb 13 20:12:18.477576 systemd[1]: Started sshd@17-10.128.0.67:22-139.178.89.65:55080.service - OpenSSH per-connection server daemon (139.178.89.65:55080). Feb 13 20:12:18.770774 sshd[5925]: Accepted publickey for core from 139.178.89.65 port 55080 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:18.773211 sshd[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:18.781230 systemd-logind[1446]: New session 17 of user core. Feb 13 20:12:18.787383 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:12:19.227002 sshd[5925]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:19.236418 systemd[1]: sshd@17-10.128.0.67:22-139.178.89.65:55080.service: Deactivated successfully. Feb 13 20:12:19.236987 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:12:19.244306 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:12:19.251609 systemd-logind[1446]: Removed session 17. Feb 13 20:12:19.294288 systemd[1]: Started sshd@18-10.128.0.67:22-139.178.89.65:55092.service - OpenSSH per-connection server daemon (139.178.89.65:55092). Feb 13 20:12:19.603237 sshd[5955]: Accepted publickey for core from 139.178.89.65 port 55092 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:19.605832 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:19.614403 systemd-logind[1446]: New session 18 of user core. Feb 13 20:12:19.620377 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:12:22.002340 sshd[5955]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:22.016304 systemd[1]: sshd@18-10.128.0.67:22-139.178.89.65:55092.service: Deactivated successfully. Feb 13 20:12:22.024348 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:12:22.030950 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:12:22.033817 systemd-logind[1446]: Removed session 18. Feb 13 20:12:22.070252 systemd[1]: Started sshd@19-10.128.0.67:22-139.178.89.65:55094.service - OpenSSH per-connection server daemon (139.178.89.65:55094). Feb 13 20:12:22.392345 sshd[5994]: Accepted publickey for core from 139.178.89.65 port 55094 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:22.394623 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:22.402143 systemd-logind[1446]: New session 19 of user core. Feb 13 20:12:22.407368 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 20:12:22.844423 sshd[5994]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:22.851508 systemd[1]: sshd@19-10.128.0.67:22-139.178.89.65:55094.service: Deactivated successfully. Feb 13 20:12:22.856167 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:12:22.857895 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:12:22.860029 systemd-logind[1446]: Removed session 19. Feb 13 20:12:22.899638 systemd[1]: Started sshd@20-10.128.0.67:22-139.178.89.65:55096.service - OpenSSH per-connection server daemon (139.178.89.65:55096). 
Feb 13 20:12:23.213004 sshd[6024]: Accepted publickey for core from 139.178.89.65 port 55096 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:23.215307 sshd[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:23.223021 systemd-logind[1446]: New session 20 of user core. Feb 13 20:12:23.231386 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:12:23.610123 sshd[6024]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:23.616723 systemd[1]: sshd@20-10.128.0.67:22-139.178.89.65:55096.service: Deactivated successfully. Feb 13 20:12:23.620639 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:12:23.621889 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:12:23.623929 systemd-logind[1446]: Removed session 20. Feb 13 20:12:28.667681 systemd[1]: Started sshd@21-10.128.0.67:22-139.178.89.65:42490.service - OpenSSH per-connection server daemon (139.178.89.65:42490). Feb 13 20:12:28.967244 sshd[6145]: Accepted publickey for core from 139.178.89.65 port 42490 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:28.969699 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:28.976982 systemd-logind[1446]: New session 21 of user core. Feb 13 20:12:28.982384 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:12:29.307740 sshd[6145]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:29.313127 systemd[1]: sshd@21-10.128.0.67:22-139.178.89.65:42490.service: Deactivated successfully. Feb 13 20:12:29.316571 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:12:29.319371 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:12:29.321492 systemd-logind[1446]: Removed session 21. Feb 13 20:12:34.367533 systemd[1]: Started sshd@22-10.128.0.67:22-139.178.89.65:42506.service - OpenSSH per-connection server daemon (139.178.89.65:42506). Feb 13 20:12:34.665665 sshd[6262]: Accepted publickey for core from 139.178.89.65 port 42506 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:34.668371 sshd[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:34.675943 systemd-logind[1446]: New session 22 of user core. Feb 13 20:12:34.682364 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:12:34.956969 sshd[6262]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:34.964347 systemd[1]: sshd@22-10.128.0.67:22-139.178.89.65:42506.service: Deactivated successfully. Feb 13 20:12:34.967837 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:12:34.969495 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:12:34.971686 systemd-logind[1446]: Removed session 22. Feb 13 20:12:40.014542 systemd[1]: Started sshd@23-10.128.0.67:22-139.178.89.65:35864.service - OpenSSH per-connection server daemon (139.178.89.65:35864). Feb 13 20:12:40.312117 sshd[6362]: Accepted publickey for core from 139.178.89.65 port 35864 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:12:40.314432 sshd[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:12:40.322210 systemd-logind[1446]: New session 23 of user core. Feb 13 20:12:40.327473 systemd[1]: Started session-23.scope - Session 23 of User core. 
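Each `sshd@N-<local>:22-<remote>:<port>.service` unit above is a per-connection daemon instance: one listener, and one independently supervised unit per accepted connection. As a loose analogy only (not how sshd or systemd is implemented), the same shape in plain Go is an accept loop that hands every connection to its own handler:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Ephemeral local port for the demo; sshd listens on :22.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("listening on", ln.Addr())
	for i := 14; ; i++ { // session numbering starts where the log does
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		// One goroutine per connection, the way systemd spawns one
		// templated sshd@<n>-<local>-<remote>.service per connection.
		go func(n int, c net.Conn) {
			defer c.Close() // "Deactivated successfully" for this instance
			fmt.Printf("session-%d: %s -> %s\n", n, c.RemoteAddr(), c.LocalAddr())
		}(i, conn)
	}
}
```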
Feb 13 20:12:40.604545 sshd[6362]: pam_unix(sshd:session): session closed for user core Feb 13 20:12:40.610629 systemd[1]: sshd@23-10.128.0.67:22-139.178.89.65:35864.service: Deactivated successfully. Feb 13 20:12:40.614021 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:12:40.617174 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:12:40.619177 systemd-logind[1446]: Removed session 23. Feb 13 20:12:42.873338 containerd[1476]: time="2025-02-13T20:12:42.873274765Z" level=info msg="StopContainer for \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\" with timeout 5 (s)" Feb 13 20:12:42.875026 containerd[1476]: time="2025-02-13T20:12:42.873807956Z" level=info msg="Stop container \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\" with signal terminated" Feb 13 20:12:42.907177 systemd[1]: cri-containerd-0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3.scope: Deactivated successfully. Feb 13 20:12:42.907540 systemd[1]: cri-containerd-0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3.scope: Consumed 11.776s CPU time. Feb 13 20:12:42.943151 containerd[1476]: time="2025-02-13T20:12:42.943050076Z" level=info msg="shim disconnected" id=0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3 namespace=k8s.io Feb 13 20:12:42.943151 containerd[1476]: time="2025-02-13T20:12:42.943151120Z" level=warning msg="cleaning up after shim disconnected" id=0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3 namespace=k8s.io Feb 13 20:12:42.943443 containerd[1476]: time="2025-02-13T20:12:42.943166687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:42.947219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3-rootfs.mount: Deactivated successfully. Feb 13 20:12:42.987723 containerd[1476]: time="2025-02-13T20:12:42.987663580Z" level=info msg="StopContainer for \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\" returns successfully" Feb 13 20:12:42.988422 containerd[1476]: time="2025-02-13T20:12:42.988380129Z" level=info msg="StopPodSandbox for \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\"" Feb 13 20:12:42.988569 containerd[1476]: time="2025-02-13T20:12:42.988449920Z" level=info msg="Container to stop \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:12:42.988569 containerd[1476]: time="2025-02-13T20:12:42.988477307Z" level=info msg="Container to stop \"507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:12:42.988569 containerd[1476]: time="2025-02-13T20:12:42.988497618Z" level=info msg="Container to stop \"83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:12:42.995291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2-shm.mount: Deactivated successfully. Feb 13 20:12:43.004378 systemd[1]: cri-containerd-a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2.scope: Deactivated successfully. 
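`StopContainer ... with timeout 5 (s)` followed by `Stop container ... with signal terminated` is the usual two-phase stop: SIGTERM, a grace period, then SIGKILL if the process outlives it. A generic Go sketch of that escalation for an ordinary child process; this is not the containerd shim's actual implementation and assumes a Unix platform:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM ("signal terminated" in the log), waits up
// to the StopContainer timeout, then escalates to SIGKILL.
func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // grace period exceeded: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	// 5s mirrors the timeout recorded for container 0a6142e4... above.
	fmt.Println("stop result:", stopWithTimeout(cmd, 5*time.Second))
}
```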
Feb 13 20:12:43.036729 containerd[1476]: time="2025-02-13T20:12:43.036641328Z" level=info msg="shim disconnected" id=a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2 namespace=k8s.io Feb 13 20:12:43.036729 containerd[1476]: time="2025-02-13T20:12:43.036729955Z" level=warning msg="cleaning up after shim disconnected" id=a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2 namespace=k8s.io Feb 13 20:12:43.041233 containerd[1476]: time="2025-02-13T20:12:43.036746703Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:12:43.041162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2-rootfs.mount: Deactivated successfully. Feb 13 20:12:43.070836 containerd[1476]: time="2025-02-13T20:12:43.070773525Z" level=info msg="TearDown network for sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" successfully" Feb 13 20:12:43.071253 containerd[1476]: time="2025-02-13T20:12:43.070862092Z" level=info msg="StopPodSandbox for \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" returns successfully" Feb 13 20:12:43.124432 kubelet[2611]: I0213 20:12:43.124238 2611 topology_manager.go:215] "Topology Admit Handler" podUID="31ce0196-28a7-4170-a266-fe1d5104ad24" podNamespace="calico-system" podName="calico-node-s2pnf" Feb 13 20:12:43.124432 kubelet[2611]: E0213 20:12:43.124380 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e" containerName="calico-typha" Feb 13 20:12:43.124432 kubelet[2611]: E0213 20:12:43.124398 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d9574409-d923-4274-a785-828313eee44c" containerName="calico-kube-controllers" Feb 13 20:12:43.124432 kubelet[2611]: E0213 20:12:43.124412 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e03ae09-2522-4225-9b0e-f0ca6b4697b9" containerName="calico-node" Feb 13 20:12:43.125275 kubelet[2611]: E0213 20:12:43.124425 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e03ae09-2522-4225-9b0e-f0ca6b4697b9" containerName="flexvol-driver" Feb 13 20:12:43.125275 kubelet[2611]: E0213 20:12:43.124457 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7e03ae09-2522-4225-9b0e-f0ca6b4697b9" containerName="install-cni" Feb 13 20:12:43.127427 kubelet[2611]: I0213 20:12:43.124507 2611 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e03ae09-2522-4225-9b0e-f0ca6b4697b9" containerName="calico-node" Feb 13 20:12:43.127427 kubelet[2611]: I0213 20:12:43.126317 2611 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4bc3f3-dc41-4720-bd5b-dfa158e88e7e" containerName="calico-typha" Feb 13 20:12:43.127427 kubelet[2611]: I0213 20:12:43.126544 2611 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9574409-d923-4274-a785-828313eee44c" containerName="calico-kube-controllers" Feb 13 20:12:43.144788 systemd[1]: Created slice kubepods-besteffort-pod31ce0196_28a7_4170_a266_fe1d5104ad24.slice - libcontainer container kubepods-besteffort-pod31ce0196_28a7_4170_a266_fe1d5104ad24.slice. 
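Before admitting calico-node-s2pnf, the cpu and memory managers log `RemoveStaleState: removing container` for each container of the pods just torn down, dropping their saved resource assignments so the replacement pod is admitted against clean state. A toy Go sketch of that sweep; the `assignments` map is a hypothetical stand-in for the managers' checkpointed state:

```go
package main

import "fmt"

// key identifies a container the way the manager log lines do:
// by pod UID plus container name.
type key struct{ podUID, container string }

// assignments maps each container to its saved resource assignment
// (e.g. a cpuset string in this sketch).
type assignments map[key]string

// removeStaleState drops entries whose pod no longer exists, which is what
// the "RemoveStaleState: removing container" lines above record.
func (a assignments) removeStaleState(live map[string]bool) {
	for k := range a {
		if !live[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q name=%q\n", k.podUID, k.container)
			delete(a, k)
		}
	}
}

func main() {
	state := assignments{
		{"6d4bc3f3-dc41", "calico-typha"}:            "0-1",
		{"d9574409-d923", "calico-kube-controllers"}: "2",
		{"31ce0196-28a7", "calico-node"}:             "3",
	}
	// Only the incoming calico-node pod is still live.
	state.removeStaleState(map[string]bool{"31ce0196-28a7": true})
	fmt.Println("remaining entries:", len(state))
}
```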
Feb 13 20:12:43.231565 kubelet[2611]: I0213 20:12:43.231471 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-net-dir\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.231565 kubelet[2611]: I0213 20:12:43.231557 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-bin-dir\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.231905 kubelet[2611]: I0213 20:12:43.231597 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-tigera-ca-bundle\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.231905 kubelet[2611]: I0213 20:12:43.231620 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-log-dir\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.231905 kubelet[2611]: I0213 20:12:43.231653 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp94q\" (UniqueName: \"kubernetes.io/projected/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-kube-api-access-tp94q\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.231905 kubelet[2611]: I0213 20:12:43.231686 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-node-certs\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.231905 kubelet[2611]: I0213 20:12:43.231711 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-policysync\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.231905 kubelet[2611]: I0213 20:12:43.231740 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-var-lib-calico\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.232290 kubelet[2611]: I0213 20:12:43.231768 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-xtables-lock\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.232290 kubelet[2611]: I0213 20:12:43.231802 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-var-run-calico\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.232290 kubelet[2611]: I0213 20:12:43.231829 2611 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-lib-modules\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.232290 kubelet[2611]: I0213 20:12:43.231862 2611 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-flexvol-driver-host\") pod \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\" (UID: \"7e03ae09-2522-4225-9b0e-f0ca6b4697b9\") " Feb 13 20:12:43.232290 kubelet[2611]: I0213 20:12:43.231972 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-lib-modules\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.232290 kubelet[2611]: I0213 20:12:43.232015 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31ce0196-28a7-4170-a266-fe1d5104ad24-tigera-ca-bundle\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.232644 kubelet[2611]: I0213 20:12:43.232068 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-cni-net-dir\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.232644 kubelet[2611]: I0213 20:12:43.232134 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-policysync\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.232644 kubelet[2611]: I0213 20:12:43.232169 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-var-lib-calico\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.232644 kubelet[2611]: I0213 20:12:43.232199 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-cni-log-dir\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.232644 kubelet[2611]: I0213 20:12:43.232232 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6qn2\" (UniqueName: \"kubernetes.io/projected/31ce0196-28a7-4170-a266-fe1d5104ad24-kube-api-access-c6qn2\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.235333 kubelet[2611]: I0213 20:12:43.232265 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/31ce0196-28a7-4170-a266-fe1d5104ad24-node-certs\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.235333 kubelet[2611]: I0213 20:12:43.232303 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-xtables-lock\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.235333 kubelet[2611]: I0213 20:12:43.232336 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-cni-bin-dir\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.235333 kubelet[2611]: I0213 20:12:43.232370 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-var-run-calico\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.235333 kubelet[2611]: I0213 20:12:43.232422 2611 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/31ce0196-28a7-4170-a266-fe1d5104ad24-flexvol-driver-host\") pod \"calico-node-s2pnf\" (UID: \"31ce0196-28a7-4170-a266-fe1d5104ad24\") " pod="calico-system/calico-node-s2pnf" Feb 13 20:12:43.235886 kubelet[2611]: I0213 20:12:43.232533 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.235886 kubelet[2611]: I0213 20:12:43.232601 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.235886 kubelet[2611]: I0213 20:12:43.232979 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.236533 kubelet[2611]: I0213 20:12:43.236492 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.237020 kubelet[2611]: I0213 20:12:43.236976 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.237285 kubelet[2611]: I0213 20:12:43.237260 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.237503 kubelet[2611]: I0213 20:12:43.237366 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.237734 kubelet[2611]: I0213 20:12:43.237612 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-policysync" (OuterVolumeSpecName: "policysync") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.240164 kubelet[2611]: I0213 20:12:43.240125 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:12:43.241419 kubelet[2611]: I0213 20:12:43.241311 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:12:43.245333 systemd[1]: var-lib-kubelet-pods-7e03ae09\x2d2522\x2d4225\x2d9b0e\x2df0ca6b4697b9-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Feb 13 20:12:43.248058 kubelet[2611]: I0213 20:12:43.247992 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-kube-api-access-tp94q" (OuterVolumeSpecName: "kube-api-access-tp94q") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "kube-api-access-tp94q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:12:43.249030 kubelet[2611]: I0213 20:12:43.248985 2611 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-node-certs" (OuterVolumeSpecName: "node-certs") pod "7e03ae09-2522-4225-9b0e-f0ca6b4697b9" (UID: "7e03ae09-2522-4225-9b0e-f0ca6b4697b9"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:12:43.258225 kubelet[2611]: I0213 20:12:43.258168 2611 scope.go:117] "RemoveContainer" containerID="507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326" Feb 13 20:12:43.260205 containerd[1476]: time="2025-02-13T20:12:43.260121206Z" level=info msg="RemoveContainer for \"507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326\"" Feb 13 20:12:43.265619 containerd[1476]: time="2025-02-13T20:12:43.265562553Z" level=info msg="RemoveContainer for \"507a6031d4e27d43d9c23aad841556ddcb7bb3f3ae2aaf8840358e597db21326\" returns successfully" Feb 13 20:12:43.265948 kubelet[2611]: I0213 20:12:43.265810 2611 scope.go:117] "RemoveContainer" containerID="83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730" Feb 13 20:12:43.267489 containerd[1476]: time="2025-02-13T20:12:43.267453520Z" level=info msg="RemoveContainer for \"83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730\"" Feb 13 20:12:43.272242 containerd[1476]: time="2025-02-13T20:12:43.272158366Z" level=info msg="RemoveContainer for \"83d2c694b25bb8364529dd6fbc22ac37ba519c07c5f8a627b856b2c95c0d1730\" returns successfully" Feb 13 20:12:43.272414 kubelet[2611]: I0213 20:12:43.272381 2611 scope.go:117] "RemoveContainer" containerID="0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3" Feb 13 20:12:43.274297 containerd[1476]: time="2025-02-13T20:12:43.274240264Z" level=info msg="RemoveContainer for \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\"" Feb 13 20:12:43.278923 containerd[1476]: time="2025-02-13T20:12:43.278867291Z" level=info msg="RemoveContainer for \"0a6142e4bfd1b11dd99a93a76d63ccd5915c5e168d1d1927dfa345ce356146a3\" returns successfully" Feb 13 20:12:43.280747 containerd[1476]: time="2025-02-13T20:12:43.280472601Z" level=info msg="StopPodSandbox for \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\"" Feb 13 20:12:43.280747 containerd[1476]: time="2025-02-13T20:12:43.280615381Z" level=info msg="TearDown network for sandbox \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\" successfully" Feb 13 20:12:43.280747 containerd[1476]: time="2025-02-13T20:12:43.280638364Z" level=info msg="StopPodSandbox for \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\" returns successfully" Feb 13 20:12:43.281106 containerd[1476]: time="2025-02-13T20:12:43.281036002Z" level=info msg="RemovePodSandbox for \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\"" Feb 13 20:12:43.281201 containerd[1476]: time="2025-02-13T20:12:43.281077806Z" level=info msg="Forcibly stopping sandbox \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\"" Feb 13 20:12:43.281275 containerd[1476]: time="2025-02-13T20:12:43.281193858Z" level=info msg="TearDown network for sandbox \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\" successfully" Feb 13 20:12:43.285987 containerd[1476]: time="2025-02-13T20:12:43.285918134Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 20:12:43.286188 containerd[1476]: time="2025-02-13T20:12:43.286002498Z" level=info msg="RemovePodSandbox \"e85ff50b51e839e687d21df0156d3259c56621ff7b5fd89333e77aba2ed3b56a\" returns successfully" Feb 13 20:12:43.286678 containerd[1476]: time="2025-02-13T20:12:43.286512669Z" level=info msg="StopPodSandbox for \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\"" Feb 13 20:12:43.337181 kubelet[2611]: I0213 20:12:43.336508 2611 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-lib-modules\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.339813 kubelet[2611]: I0213 20:12:43.338265 2611 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-flexvol-driver-host\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.339813 kubelet[2611]: I0213 20:12:43.338302 2611 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-bin-dir\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.339813 kubelet[2611]: I0213 20:12:43.338325 2611 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-net-dir\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.339813 kubelet[2611]: I0213 20:12:43.338346 2611 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tp94q\" (UniqueName: \"kubernetes.io/projected/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-kube-api-access-tp94q\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.339813 kubelet[2611]: I0213 20:12:43.338369 2611 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-tigera-ca-bundle\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.339813 kubelet[2611]: I0213 20:12:43.338393 2611 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-cni-log-dir\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.339813 kubelet[2611]: I0213 20:12:43.338417 2611 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-policysync\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.340468 kubelet[2611]: I0213 20:12:43.338441 2611 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-node-certs\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.340468 kubelet[2611]: I0213 20:12:43.338472 2611 reconciler_common.go:289] "Volume detached for volume 
\"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-var-lib-calico\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.340468 kubelet[2611]: I0213 20:12:43.338489 2611 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-xtables-lock\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.340468 kubelet[2611]: I0213 20:12:43.338507 2611 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7e03ae09-2522-4225-9b0e-f0ca6b4697b9-var-run-calico\") on node \"ci-4081-3-1-2a20b0abae64c94e0d5e.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.333 [WARNING][6531] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.333 [INFO][6531] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.333 [INFO][6531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" iface="eth0" netns="" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.333 [INFO][6531] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.333 [INFO][6531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.385 [INFO][6537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.385 [INFO][6537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.385 [INFO][6537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.394 [WARNING][6537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.394 [INFO][6537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0" Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.396 [INFO][6537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:12:43.400362 containerd[1476]: 2025-02-13 20:12:43.398 [INFO][6531] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Feb 13 20:12:43.400362 containerd[1476]: time="2025-02-13T20:12:43.400332913Z" level=info msg="TearDown network for sandbox \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\" successfully" Feb 13 20:12:43.401765 containerd[1476]: time="2025-02-13T20:12:43.400374115Z" level=info msg="StopPodSandbox for \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\" returns successfully" Feb 13 20:12:43.403138 containerd[1476]: time="2025-02-13T20:12:43.402630744Z" level=info msg="RemovePodSandbox for \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\"" Feb 13 20:12:43.403138 containerd[1476]: time="2025-02-13T20:12:43.402703307Z" level=info msg="Forcibly stopping sandbox \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\"" Feb 13 20:12:43.425192 systemd[1]: Removed slice kubepods-besteffort-pod7e03ae09_2522_4225_9b0e_f0ca6b4697b9.slice - libcontainer container kubepods-besteffort-pod7e03ae09_2522_4225_9b0e_f0ca6b4697b9.slice. Feb 13 20:12:43.425645 systemd[1]: kubepods-besteffort-pod7e03ae09_2522_4225_9b0e_f0ca6b4697b9.slice: Consumed 12.494s CPU time. Feb 13 20:12:43.450467 containerd[1476]: time="2025-02-13T20:12:43.450271491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s2pnf,Uid:31ce0196-28a7-4170-a266-fe1d5104ad24,Namespace:calico-system,Attempt:0,}" Feb 13 20:12:43.493016 containerd[1476]: time="2025-02-13T20:12:43.492577985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:12:43.493016 containerd[1476]: time="2025-02-13T20:12:43.492656622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:12:43.493016 containerd[1476]: time="2025-02-13T20:12:43.492676030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:43.493016 containerd[1476]: time="2025-02-13T20:12:43.492846030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:12:43.529119 systemd[1]: Started cri-containerd-2b6b1caf16b3e485f67b534c2c1418175144cfbf6863b6c6b2ac2327f09405e9.scope - libcontainer container 2b6b1caf16b3e485f67b534c2c1418175144cfbf6863b6c6b2ac2327f09405e9. 
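Starting the replacement sandbox makes the runc shim enumerate its plugins (`io.containerd.event.v1.publisher`, `io.containerd.internal.v1.shutdown`, the ttrpc task and pause plugins). A toy Go sketch of an ordered plugin registry in that style; real containerd derives plugin init order from declared requirements, which this skips:

```go
package main

import "fmt"

// plugin mimics the id/type pairs the shim logs while loading.
type plugin struct {
	id, typ string
	init    func() error
}

// registry is a toy ordered plugin list, seeded with the IDs from the log.
var registry = []plugin{
	{"io.containerd.event.v1.publisher", "io.containerd.event.v1", func() error { return nil }},
	{"io.containerd.internal.v1.shutdown", "io.containerd.internal.v1", func() error { return nil }},
	{"io.containerd.ttrpc.v1.task", "io.containerd.ttrpc.v1", func() error { return nil }},
	{"io.containerd.ttrpc.v1.pause", "io.containerd.ttrpc.v1", func() error { return nil }},
}

func main() {
	for _, p := range registry {
		fmt.Printf("loading plugin %q... type=%s\n", p.id, p.typ)
		if err := p.init(); err != nil {
			fmt.Printf("plugin %q failed: %v\n", p.id, err)
		}
	}
}
```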
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.463 [WARNING][6557] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" WorkloadEndpoint="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0"
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.463 [INFO][6557] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91"
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.463 [INFO][6557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" iface="eth0" netns=""
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.463 [INFO][6557] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91"
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.463 [INFO][6557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91"
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.515 [INFO][6564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0"
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.515 [INFO][6564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.515 [INFO][6564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.526 [WARNING][6564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0"
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.526 [INFO][6564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" HandleID="k8s-pod-network.1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91" Workload="ci--4081--3--1--2a20b0abae64c94e0d5e.c.flatcar--212911.internal-k8s-calico--kube--controllers--84fb4d47d4--ffz4k-eth0"
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.528 [INFO][6564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:12:43.533569 containerd[1476]: 2025-02-13 20:12:43.530 [INFO][6557] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91"
Feb 13 20:12:43.535436 containerd[1476]: time="2025-02-13T20:12:43.533678486Z" level=info msg="TearDown network for sandbox \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\" successfully"
Feb 13 20:12:43.542224 containerd[1476]: time="2025-02-13T20:12:43.542020710Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:12:43.542224 containerd[1476]: time="2025-02-13T20:12:43.542183476Z" level=info msg="RemovePodSandbox \"1408bb1225567ea67cf7921e97ebc98daa6712943ea18493d03c65c7063e9d91\" returns successfully"
Feb 13 20:12:43.543713 containerd[1476]: time="2025-02-13T20:12:43.543223268Z" level=info msg="StopPodSandbox for \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\""
Feb 13 20:12:43.543713 containerd[1476]: time="2025-02-13T20:12:43.543341252Z" level=info msg="TearDown network for sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" successfully"
Feb 13 20:12:43.543713 containerd[1476]: time="2025-02-13T20:12:43.543364090Z" level=info msg="StopPodSandbox for \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" returns successfully"
Feb 13 20:12:43.544540 containerd[1476]: time="2025-02-13T20:12:43.544213701Z" level=info msg="RemovePodSandbox for \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\""
Feb 13 20:12:43.544540 containerd[1476]: time="2025-02-13T20:12:43.544257388Z" level=info msg="Forcibly stopping sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\""
Feb 13 20:12:43.544540 containerd[1476]: time="2025-02-13T20:12:43.544343199Z" level=info msg="TearDown network for sandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" successfully"
Feb 13 20:12:43.549625 containerd[1476]: time="2025-02-13T20:12:43.549392718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:12:43.549625 containerd[1476]: time="2025-02-13T20:12:43.549480941Z" level=info msg="RemovePodSandbox \"a5ecd96afb7947c5419d9864cead94d33fe8140d38573ab3d84e98b5ec6fadc2\" returns successfully"
Feb 13 20:12:43.572123 containerd[1476]: time="2025-02-13T20:12:43.572051674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s2pnf,Uid:31ce0196-28a7-4170-a266-fe1d5104ad24,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b6b1caf16b3e485f67b534c2c1418175144cfbf6863b6c6b2ac2327f09405e9\""
Feb 13 20:12:43.578490 containerd[1476]: time="2025-02-13T20:12:43.578360331Z" level=info msg="CreateContainer within sandbox \"2b6b1caf16b3e485f67b534c2c1418175144cfbf6863b6c6b2ac2327f09405e9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 20:12:43.597118 containerd[1476]: time="2025-02-13T20:12:43.597033075Z" level=info msg="CreateContainer within sandbox \"2b6b1caf16b3e485f67b534c2c1418175144cfbf6863b6c6b2ac2327f09405e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8af65256d0f7277fbc9fb0fe273bfa6e01c9586aec87f52a6f08e5524fa3f71e\""
Feb 13 20:12:43.599399 containerd[1476]: time="2025-02-13T20:12:43.597690668Z" level=info msg="StartContainer for \"8af65256d0f7277fbc9fb0fe273bfa6e01c9586aec87f52a6f08e5524fa3f71e\""
Feb 13 20:12:43.647380 systemd[1]: Started cri-containerd-8af65256d0f7277fbc9fb0fe273bfa6e01c9586aec87f52a6f08e5524fa3f71e.scope - libcontainer container 8af65256d0f7277fbc9fb0fe273bfa6e01c9586aec87f52a6f08e5524fa3f71e.
Feb 13 20:12:43.693159 containerd[1476]: time="2025-02-13T20:12:43.691345930Z" level=info msg="StartContainer for \"8af65256d0f7277fbc9fb0fe273bfa6e01c9586aec87f52a6f08e5524fa3f71e\" returns successfully"
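The RunPodSandbox / CreateContainer / StartContainer sequence above is the kubelet recreating the calico-node-s2pnf pod; flexvol-driver is its first init container. As a follow-up to the previous sketch, and under the same assumptions about the CRI socket, one could confirm over CRI that the new sandbox's containers reached the expected state; the sandbox ID is the one returned in the log.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI: %v", err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Sandbox ID from the "returns sandbox id" log line.
	sandboxID := "2b6b1caf16b3e485f67b534c2c1418175144cfbf6863b6c6b2ac2327f09405e9"

	// List only containers belonging to this sandbox.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{PodSandboxId: sandboxID},
	})
	if err != nil {
		log.Fatalf("list containers: %v", err)
	}
	for _, c := range resp.Containers {
		// Expect flexvol-driver (8af65256...) here: RUNNING while the init
		// container installs the FlexVolume driver, then EXITED once done.
		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State.String())
	}
}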