Feb 13 19:51:29.090830 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025 Feb 13 19:51:29.090880 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:51:29.090897 kernel: BIOS-provided physical RAM map: Feb 13 19:51:29.090911 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Feb 13 19:51:29.090923 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Feb 13 19:51:29.090936 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Feb 13 19:51:29.090952 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Feb 13 19:51:29.090971 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Feb 13 19:51:29.090984 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd327fff] usable Feb 13 19:51:29.090999 kernel: BIOS-e820: [mem 0x00000000bd328000-0x00000000bd330fff] ACPI data Feb 13 19:51:29.091013 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable Feb 13 19:51:29.091026 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Feb 13 19:51:29.091041 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Feb 13 19:51:29.091053 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Feb 13 19:51:29.091073 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Feb 13 19:51:29.091087 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Feb 13 19:51:29.091103 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Feb 13 19:51:29.091117 kernel: NX (Execute Disable) protection: active Feb 13 19:51:29.091130 kernel: APIC: Static calls initialized Feb 13 19:51:29.091145 kernel: efi: EFI v2.7 by EDK II Feb 13 19:51:29.091161 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd328018 Feb 13 19:51:29.091178 kernel: random: crng init done Feb 13 19:51:29.091193 kernel: secureboot: Secure boot disabled Feb 13 19:51:29.091208 kernel: SMBIOS 2.4 present. 
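The entry above records the full kernel command line, including the Flatcar-specific parameters (flatcar.oem.id, verity.usrhash, mount.usr). A minimal sketch, not part of the log, of how such a command line could be split into key/value pairs for inspection; it reads /proc/cmdline on a running system and deliberately ignores quoted values, which is enough for the parameters shown here.

```python
# Minimal sketch (assumption: run on a Linux host; quoted values are not handled).
from pathlib import Path


def parse_cmdline(text: str) -> dict:
    """Split a kernel command line into {key: value}; bare flags map to True."""
    params = {}
    for token in text.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params


if __name__ == "__main__":
    cmdline = Path("/proc/cmdline").read_text().strip()
    params = parse_cmdline(cmdline)
    # e.g. the OEM platform and the dm-verity root hash seen in the log above.
    print(params.get("flatcar.oem.id"), params.get("verity.usrhash"))
```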
Feb 13 19:51:29.091229 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024 Feb 13 19:51:29.091245 kernel: Hypervisor detected: KVM Feb 13 19:51:29.091262 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 19:51:29.091279 kernel: kvm-clock: using sched offset of 13451446801 cycles Feb 13 19:51:29.091296 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 19:51:29.091314 kernel: tsc: Detected 2299.998 MHz processor Feb 13 19:51:29.091331 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:51:29.091349 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:51:29.091366 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Feb 13 19:51:29.091387 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Feb 13 19:51:29.091405 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:51:29.091421 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Feb 13 19:51:29.091438 kernel: Using GB pages for direct mapping Feb 13 19:51:29.091455 kernel: ACPI: Early table checksum verification disabled Feb 13 19:51:29.091471 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Feb 13 19:51:29.091489 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Feb 13 19:51:29.091513 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 13 19:51:29.091535 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 13 19:51:29.091553 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 13 19:51:29.091570 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Feb 13 19:51:29.091589 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 13 19:51:29.091607 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Feb 13 19:51:29.091624 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 13 19:51:29.091645 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 13 19:51:29.091662 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 13 19:51:29.091691 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 13 19:51:29.091709 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 13 19:51:29.091726 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 13 19:51:29.093198 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 13 19:51:29.093219 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 13 19:51:29.093238 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 13 19:51:29.093256 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 13 19:51:29.093282 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 13 19:51:29.093300 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 13 19:51:29.093319 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 19:51:29.093337 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 19:51:29.093355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 19:51:29.093374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Feb 13 19:51:29.093392 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Feb 13 19:51:29.093410 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 13 19:51:29.093428 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 13 19:51:29.093451 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Feb 13 19:51:29.093469 kernel: Zone ranges: Feb 13 19:51:29.093488 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:51:29.093505 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 19:51:29.093523 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 13 19:51:29.093541 kernel: Movable zone start for each node Feb 13 19:51:29.093560 kernel: Early memory node ranges Feb 13 19:51:29.093578 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 13 19:51:29.093596 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 13 19:51:29.093614 kernel: node 0: [mem 0x0000000000100000-0x00000000bd327fff] Feb 13 19:51:29.093636 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff] Feb 13 19:51:29.093655 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 13 19:51:29.093679 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 13 19:51:29.093698 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 13 19:51:29.093716 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:51:29.093748 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 13 19:51:29.093775 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 13 19:51:29.093791 kernel: On node 0, zone DMA32: 9 pages in unavailable ranges Feb 13 19:51:29.093807 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 19:51:29.093828 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Feb 13 19:51:29.093846 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 19:51:29.093864 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 19:51:29.093881 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 19:51:29.093899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 19:51:29.093917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:51:29.093934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 19:51:29.093952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 19:51:29.093970 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:51:29.093993 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 19:51:29.094011 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 19:51:29.094028 kernel: Booting paravirtualized kernel on KVM Feb 13 19:51:29.094048 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:51:29.094066 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 19:51:29.094084 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 19:51:29.094102 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 19:51:29.094120 kernel: pcpu-alloc: [0] 0 1 Feb 13 19:51:29.094138 kernel: kvm-guest: PV spinlocks enabled Feb 13 19:51:29.094159 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:51:29.094180 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:51:29.094199 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:51:29.094217 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 19:51:29.094236 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:51:29.094254 kernel: Fallback order for Node 0: 0 Feb 13 19:51:29.094272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932271 Feb 13 19:51:29.094290 kernel: Policy zone: Normal Feb 13 19:51:29.094313 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:51:29.094331 kernel: software IO TLB: area num 2. Feb 13 19:51:29.094351 kernel: Memory: 7513364K/7860548K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 346928K reserved, 0K cma-reserved) Feb 13 19:51:29.094368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 19:51:29.094387 kernel: Kernel/User page tables isolation: enabled Feb 13 19:51:29.094406 kernel: ftrace: allocating 37923 entries in 149 pages Feb 13 19:51:29.094424 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:51:29.094443 kernel: Dynamic Preempt: voluntary Feb 13 19:51:29.094479 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:51:29.094500 kernel: rcu: RCU event tracing is enabled. Feb 13 19:51:29.094520 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 19:51:29.094541 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:51:29.094564 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:51:29.094583 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:51:29.094602 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:51:29.094622 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 19:51:29.094658 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 19:51:29.094690 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:51:29.094708 kernel: Console: colour dummy device 80x25 Feb 13 19:51:29.094902 kernel: printk: console [ttyS0] enabled Feb 13 19:51:29.094927 kernel: ACPI: Core revision 20230628 Feb 13 19:51:29.094945 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:51:29.094964 kernel: x2apic enabled Feb 13 19:51:29.094983 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 19:51:29.095001 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 13 19:51:29.095029 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 19:51:29.095055 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Feb 13 19:51:29.095073 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 13 19:51:29.095094 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 13 19:51:29.095114 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:51:29.095132 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 19:51:29.095150 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 19:51:29.095168 kernel: Spectre V2 : Mitigation: IBRS Feb 13 19:51:29.095186 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:51:29.095207 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:51:29.095224 kernel: RETBleed: Mitigation: IBRS Feb 13 19:51:29.095243 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 19:51:29.095262 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Feb 13 19:51:29.095289 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 19:51:29.095310 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 19:51:29.095328 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 19:51:29.095347 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:51:29.095365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:51:29.095388 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:51:29.095406 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:51:29.095425 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 19:51:29.095443 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:51:29.095462 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:51:29.095489 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:51:29.095509 kernel: landlock: Up and running. Feb 13 19:51:29.095529 kernel: SELinux: Initializing. Feb 13 19:51:29.095549 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:51:29.095572 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:51:29.095590 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 13 19:51:29.095608 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:51:29.095627 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:51:29.095646 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:51:29.095665 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 13 19:51:29.095722 kernel: signal: max sigframe size: 1776 Feb 13 19:51:29.095769 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:51:29.095789 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:51:29.095812 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 19:51:29.095830 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:51:29.095849 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:51:29.095867 kernel: .... node #0, CPUs: #1 Feb 13 19:51:29.095886 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 19:51:29.095913 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 19:51:29.095932 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:51:29.095951 kernel: smpboot: Max logical packages: 1 Feb 13 19:51:29.095974 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 13 19:51:29.095994 kernel: devtmpfs: initialized Feb 13 19:51:29.096013 kernel: x86/mm: Memory block size: 128MB Feb 13 19:51:29.096031 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 13 19:51:29.096050 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:51:29.096069 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 19:51:29.096087 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:51:29.096106 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:51:29.096125 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:51:29.096157 kernel: audit: type=2000 audit(1739476287.756:1): state=initialized audit_enabled=0 res=1 Feb 13 19:51:29.096176 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:51:29.096195 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:51:29.096215 kernel: cpuidle: using governor menu Feb 13 19:51:29.096235 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:51:29.096254 kernel: dca service started, version 1.12.1 Feb 13 19:51:29.096272 kernel: PCI: Using configuration type 1 for base access Feb 13 19:51:29.096290 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 19:51:29.096308 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:51:29.096331 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:51:29.096373 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:51:29.096393 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:51:29.096412 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:51:29.096430 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:51:29.096450 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:51:29.096470 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:51:29.096490 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 19:51:29.096510 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:51:29.096534 kernel: ACPI: Interpreter enabled Feb 13 19:51:29.096553 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 19:51:29.096573 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:51:29.096593 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:51:29.096612 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 19:51:29.096631 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 19:51:29.096651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:51:29.096957 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:51:29.097165 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 19:51:29.097386 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 19:51:29.097412 kernel: PCI host bridge to bus 0000:00 Feb 13 19:51:29.097589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:51:29.097799 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:51:29.097963 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:51:29.098122 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 13 19:51:29.098288 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:51:29.098484 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 19:51:29.098679 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 13 19:51:29.099233 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 19:51:29.099421 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 19:51:29.099608 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 13 19:51:29.099844 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 13 19:51:29.100026 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 13 19:51:29.100212 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 19:51:29.100392 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 13 19:51:29.100569 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 13 19:51:29.100777 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:51:29.100960 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 13 19:51:29.101147 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 13 19:51:29.101170 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 19:51:29.101189 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 19:51:29.101207 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:51:29.101226 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 19:51:29.101244 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 19:51:29.101262 kernel: iommu: Default domain type: Translated Feb 13 19:51:29.101281 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:51:29.101299 kernel: efivars: Registered efivars operations Feb 13 19:51:29.101322 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:51:29.101341 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:51:29.101359 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Feb 13 19:51:29.101377 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Feb 13 19:51:29.101395 kernel: e820: reserve RAM buffer [mem 0xbd328000-0xbfffffff] Feb 13 19:51:29.101413 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Feb 13 19:51:29.101431 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Feb 13 19:51:29.101449 kernel: vgaarb: loaded Feb 13 19:51:29.101467 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 19:51:29.101490 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:51:29.101508 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:51:29.101527 kernel: pnp: PnP ACPI init Feb 13 19:51:29.101545 kernel: pnp: PnP ACPI: found 7 devices Feb 13 19:51:29.101564 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:51:29.101582 kernel: NET: Registered PF_INET protocol family Feb 13 19:51:29.101601 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 19:51:29.101620 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 19:51:29.101639 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:51:29.102757 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:51:29.102787 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 19:51:29.102805 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 19:51:29.102824 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 19:51:29.102842 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 19:51:29.102859 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:51:29.102877 kernel: NET: Registered PF_XDP protocol family Feb 13 19:51:29.103085 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:51:29.103263 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:51:29.103442 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:51:29.103615 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Feb 13 19:51:29.104015 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 19:51:29.104048 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:51:29.104068 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 19:51:29.104086 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Feb 13 19:51:29.104109 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 19:51:29.104126 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 19:51:29.104145 kernel: clocksource: Switched to clocksource tsc Feb 
13 19:51:29.104164 kernel: Initialise system trusted keyrings Feb 13 19:51:29.104182 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 19:51:29.104201 kernel: Key type asymmetric registered Feb 13 19:51:29.104220 kernel: Asymmetric key parser 'x509' registered Feb 13 19:51:29.104238 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:51:29.104255 kernel: io scheduler mq-deadline registered Feb 13 19:51:29.104277 kernel: io scheduler kyber registered Feb 13 19:51:29.104295 kernel: io scheduler bfq registered Feb 13 19:51:29.104314 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:51:29.104335 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 19:51:29.104533 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 13 19:51:29.104557 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 13 19:51:29.104772 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 13 19:51:29.104798 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 19:51:29.104989 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 13 19:51:29.105021 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:51:29.105038 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:51:29.105056 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 19:51:29.105073 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 13 19:51:29.105092 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 13 19:51:29.105297 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 13 19:51:29.105327 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 19:51:29.105348 kernel: i8042: Warning: Keylock active Feb 13 19:51:29.105373 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 19:51:29.105392 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 19:51:29.105590 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 13 19:51:29.105802 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 19:51:29.105970 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:51:28 UTC (1739476288) Feb 13 19:51:29.106129 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 19:51:29.106152 kernel: intel_pstate: CPU model not supported Feb 13 19:51:29.106170 kernel: pstore: Using crash dump compression: deflate Feb 13 19:51:29.106194 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 19:51:29.106213 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:51:29.106231 kernel: Segment Routing with IPv6 Feb 13 19:51:29.106250 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:51:29.106268 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:51:29.106286 kernel: Key type dns_resolver registered Feb 13 19:51:29.106304 kernel: IPI shorthand broadcast: enabled Feb 13 19:51:29.106321 kernel: sched_clock: Marking stable (891004436, 171009140)->(1110276310, -48262734) Feb 13 19:51:29.106340 kernel: registered taskstats version 1 Feb 13 19:51:29.106362 kernel: Loading compiled-in X.509 certificates Feb 13 19:51:29.106380 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b' Feb 13 19:51:29.106398 kernel: Key type .fscrypt registered Feb 13 19:51:29.106416 kernel: Key type fscrypt-provisioning registered Feb 13 19:51:29.106433 kernel: ima: Allocated hash algorithm: 
sha1 Feb 13 19:51:29.106449 kernel: ima: No architecture policies found Feb 13 19:51:29.106467 kernel: clk: Disabling unused clocks Feb 13 19:51:29.106486 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 19:51:29.106504 kernel: Write protecting the kernel read-only data: 36864k Feb 13 19:51:29.106525 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 19:51:29.106543 kernel: Run /init as init process Feb 13 19:51:29.106559 kernel: with arguments: Feb 13 19:51:29.106577 kernel: /init Feb 13 19:51:29.106595 kernel: with environment: Feb 13 19:51:29.106610 kernel: HOME=/ Feb 13 19:51:29.106626 kernel: TERM=linux Feb 13 19:51:29.106643 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:51:29.106660 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Feb 13 19:51:29.106697 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:51:29.106719 systemd[1]: Detected virtualization google. Feb 13 19:51:29.106880 systemd[1]: Detected architecture x86-64. Feb 13 19:51:29.106900 systemd[1]: Running in initrd. Feb 13 19:51:29.106919 systemd[1]: No hostname configured, using default hostname. Feb 13 19:51:29.106937 systemd[1]: Hostname set to . Feb 13 19:51:29.106957 systemd[1]: Initializing machine ID from random generator. Feb 13 19:51:29.106982 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:51:29.107003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:29.107023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:29.107044 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:51:29.107065 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:51:29.107084 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:51:29.107103 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:51:29.107132 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:51:29.107170 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:51:29.107195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:29.107215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:29.107237 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:51:29.107262 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:51:29.107283 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:51:29.107304 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:51:29.107325 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:51:29.107346 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:51:29.107368 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Feb 13 19:51:29.107392 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:51:29.107413 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:29.107434 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:29.107458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:29.107479 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:51:29.107499 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:51:29.107520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:51:29.107541 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:51:29.107561 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:51:29.107581 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:51:29.107603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:51:29.107623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:29.107648 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:51:29.107725 systemd-journald[184]: Collecting audit messages is disabled. Feb 13 19:51:29.107785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:29.107803 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:51:29.107828 systemd-journald[184]: Journal started Feb 13 19:51:29.107862 systemd-journald[184]: Runtime Journal (/run/log/journal/304ea9aa763d4cf1b2983c672e5133ef) is 8.0M, max 148.7M, 140.7M free. Feb 13 19:51:29.112169 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:51:29.112745 systemd-modules-load[185]: Inserted module 'overlay' Feb 13 19:51:29.125978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:51:29.137960 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:51:29.139981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:29.163147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:29.164828 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:51:29.167341 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:29.172090 kernel: Bridge firewalling registered Feb 13 19:51:29.173865 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 13 19:51:29.178041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:51:29.178662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:29.192278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:51:29.193091 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:29.216266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:29.225399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:29.230304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:51:29.240058 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:51:29.249265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:51:29.270676 dracut-cmdline[217]: dracut-dracut-053 Feb 13 19:51:29.275443 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:51:29.309176 systemd-resolved[219]: Positive Trust Anchors: Feb 13 19:51:29.309198 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:51:29.309270 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:51:29.316651 systemd-resolved[219]: Defaulting to hostname 'linux'. Feb 13 19:51:29.318600 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:51:29.324586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:29.385778 kernel: SCSI subsystem initialized Feb 13 19:51:29.397761 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:51:29.409765 kernel: iscsi: registered transport (tcp) Feb 13 19:51:29.433888 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:51:29.433994 kernel: QLogic iSCSI HBA Driver Feb 13 19:51:29.488156 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:51:29.493011 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:51:29.536777 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:51:29.536877 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:51:29.536905 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:51:29.586769 kernel: raid6: avx2x4 gen() 17805 MB/s Feb 13 19:51:29.607770 kernel: raid6: avx2x2 gen() 17717 MB/s Feb 13 19:51:29.633919 kernel: raid6: avx2x1 gen() 13562 MB/s Feb 13 19:51:29.633998 kernel: raid6: using algorithm avx2x4 gen() 17805 MB/s Feb 13 19:51:29.660789 kernel: raid6: .... xor() 6709 MB/s, rmw enabled Feb 13 19:51:29.660869 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:51:29.690779 kernel: xor: automatically using best checksumming function avx Feb 13 19:51:29.872773 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:51:29.887088 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:51:29.902029 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:51:29.938162 systemd-udevd[402]: Using default interface naming scheme 'v255'. 
Feb 13 19:51:29.945396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:29.978019 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:51:30.017485 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Feb 13 19:51:30.056081 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:51:30.062194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:51:30.160069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:30.197034 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:51:30.249766 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:51:30.270703 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:51:30.300909 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:51:30.287998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:30.324039 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:51:30.329765 kernel: AES CTR mode by8 optimization enabled Feb 13 19:51:30.329855 kernel: scsi host0: Virtio SCSI HBA Feb 13 19:51:30.356911 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:51:30.387793 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 13 19:51:30.401028 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:51:30.411168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:51:30.411368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:30.475921 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 13 19:51:30.522594 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 13 19:51:30.523511 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 13 19:51:30.523812 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 13 19:51:30.524049 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:51:30.524284 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:51:30.524313 kernel: GPT:17805311 != 25165823 Feb 13 19:51:30.524338 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:51:30.524363 kernel: GPT:17805311 != 25165823 Feb 13 19:51:30.524398 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:51:30.524429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:30.524454 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 13 19:51:30.476066 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:30.476294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:30.476522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:30.592688 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (447) Feb 13 19:51:30.516456 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:30.614942 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (459) Feb 13 19:51:30.539304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 19:51:30.560754 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:51:30.639005 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Feb 13 19:51:30.653437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:30.677707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Feb 13 19:51:30.705255 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Feb 13 19:51:30.705510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Feb 13 19:51:30.757248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 19:51:30.762033 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:51:30.794004 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:30.819200 disk-uuid[540]: Primary Header is updated. Feb 13 19:51:30.819200 disk-uuid[540]: Secondary Entries is updated. Feb 13 19:51:30.819200 disk-uuid[540]: Secondary Header is updated. Feb 13 19:51:30.837759 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:30.856517 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:30.882990 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:31.883349 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:31.883439 disk-uuid[541]: The operation has completed successfully. Feb 13 19:51:31.965037 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:51:31.965193 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:51:31.982974 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:51:32.025261 sh[563]: Success Feb 13 19:51:32.049771 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 19:51:32.143043 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:51:32.150978 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:51:32.174368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:51:32.230193 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 19:51:32.230287 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:51:32.230313 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:51:32.239621 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:51:32.252246 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:51:32.281796 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:51:32.291676 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:51:32.292698 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:51:32.299995 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Feb 13 19:51:32.362935 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:32.362978 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:51:32.363003 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:51:32.353206 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:51:32.387919 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:51:32.387964 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:51:32.405818 kernel: BTRFS info (device sda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:32.423585 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:51:32.441068 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:51:32.513938 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:51:32.520172 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:51:32.635689 systemd-networkd[746]: lo: Link UP Feb 13 19:51:32.636177 systemd-networkd[746]: lo: Gained carrier Feb 13 19:51:32.638660 systemd-networkd[746]: Enumeration completed Feb 13 19:51:32.649360 ignition[690]: Ignition 2.20.0 Feb 13 19:51:32.638865 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:51:32.649370 ignition[690]: Stage: fetch-offline Feb 13 19:51:32.640283 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:32.649417 ignition[690]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:32.640290 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:51:32.649428 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:32.642627 systemd-networkd[746]: eth0: Link UP Feb 13 19:51:32.649565 ignition[690]: parsed url from cmdline: "" Feb 13 19:51:32.642634 systemd-networkd[746]: eth0: Gained carrier Feb 13 19:51:32.649572 ignition[690]: no config URL provided Feb 13 19:51:32.642649 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:32.649578 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:51:32.652877 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.69/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 19:51:32.649588 ignition[690]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:51:32.661314 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:51:32.649597 ignition[690]: failed to fetch config: resource requires networking Feb 13 19:51:32.679658 systemd[1]: Reached target network.target - Network. Feb 13 19:51:32.650018 ignition[690]: Ignition finished successfully Feb 13 19:51:32.699034 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 19:51:32.746254 ignition[757]: Ignition 2.20.0 Feb 13 19:51:32.755716 unknown[757]: fetched base config from "system" Feb 13 19:51:32.746263 ignition[757]: Stage: fetch Feb 13 19:51:32.755750 unknown[757]: fetched base config from "system" Feb 13 19:51:32.746465 ignition[757]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:32.755761 unknown[757]: fetched user config from "gcp" Feb 13 19:51:32.746477 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:32.758160 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:51:32.746606 ignition[757]: parsed url from cmdline: "" Feb 13 19:51:32.783026 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:51:32.746613 ignition[757]: no config URL provided Feb 13 19:51:32.828465 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:51:32.746620 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:51:32.864002 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:51:32.746631 ignition[757]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:51:32.895279 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:51:32.746658 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 13 19:51:32.913130 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:51:32.750502 ignition[757]: GET result: OK Feb 13 19:51:32.930945 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:51:32.750594 ignition[757]: parsing config with SHA512: 4fc3d15d7233892922069cb86f01166ff65485ce91d9a4cb562361105129e08428522fd031c82753650f9a821022c4f50ac66eb07309da8f9cdcf9b39077bfbd Feb 13 19:51:32.948957 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:51:32.756184 ignition[757]: fetch: fetch complete Feb 13 19:51:32.962969 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:51:32.756195 ignition[757]: fetch: fetch passed Feb 13 19:51:32.978950 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:51:32.756267 ignition[757]: Ignition finished successfully Feb 13 19:51:32.999110 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:51:32.816785 ignition[763]: Ignition 2.20.0 Feb 13 19:51:32.816797 ignition[763]: Stage: kargs Feb 13 19:51:32.817205 ignition[763]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:32.817224 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:32.818044 ignition[763]: kargs: kargs passed Feb 13 19:51:32.818101 ignition[763]: Ignition finished successfully Feb 13 19:51:32.883905 ignition[769]: Ignition 2.20.0 Feb 13 19:51:32.883916 ignition[769]: Stage: disks Feb 13 19:51:32.884144 ignition[769]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:32.884156 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:32.885091 ignition[769]: disks: disks passed Feb 13 19:51:32.885145 ignition[769]: Ignition finished successfully Feb 13 19:51:33.060454 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 19:51:33.242921 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:51:33.247896 systemd[1]: Mounting sysroot.mount - /sysroot... 
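In the fetch stage above, Ignition retrieves the instance user-data from the GCE metadata server (GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data, result OK). A minimal sketch, not part of the log, of an equivalent request; the endpoint URL is taken verbatim from the log, the request only works from inside a GCE instance, and the metadata server requires the Metadata-Flavor: Google header.

```python
# Minimal sketch (assumption: executed on the GCE instance itself).
import urllib.error
import urllib.request

URL = ("http://169.254.169.254/computeMetadata/v1/"
       "instance/attributes/user-data")

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())
except urllib.error.HTTPError as err:
    # A 404 here just means no user-data attribute is set on the instance.
    print(f"metadata server returned {err.code}")
```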
Feb 13 19:51:33.402778 kernel: EXT4-fs (sda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none. Feb 13 19:51:33.404206 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:51:33.405126 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:51:33.428055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:51:33.459903 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:51:33.503506 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (785) Feb 13 19:51:33.503582 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:33.503620 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:51:33.503644 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:51:33.460688 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:51:33.460805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:51:33.561058 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:51:33.561113 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:51:33.460853 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:51:33.534184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:51:33.570585 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:51:33.593011 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:51:33.736555 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:51:33.747079 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:51:33.758266 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:51:33.767953 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:51:33.921773 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:51:33.926899 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:51:33.964793 kernel: BTRFS info (device sda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:33.976029 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:51:33.986182 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:51:34.014916 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:51:34.026583 ignition[897]: INFO : Ignition 2.20.0 Feb 13 19:51:34.026583 ignition[897]: INFO : Stage: mount Feb 13 19:51:34.052902 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:34.052902 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:34.052902 ignition[897]: INFO : mount: mount passed Feb 13 19:51:34.052902 ignition[897]: INFO : Ignition finished successfully Feb 13 19:51:34.032362 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:51:34.044897 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:51:34.417105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:51:34.451493 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (909) Feb 13 19:51:34.451537 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:34.451563 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:51:34.423060 systemd-networkd[746]: eth0: Gained IPv6LL Feb 13 19:51:34.484085 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:51:34.484123 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:51:34.484139 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:51:34.481180 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:51:34.517485 ignition[926]: INFO : Ignition 2.20.0 Feb 13 19:51:34.517485 ignition[926]: INFO : Stage: files Feb 13 19:51:34.531996 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:34.531996 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:34.531996 ignition[926]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:51:34.531996 ignition[926]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:51:34.531996 ignition[926]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:51:34.531996 ignition[926]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:51:34.531996 ignition[926]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:51:34.531996 ignition[926]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 19:51:34.526264 unknown[926]: wrote ssh authorized keys file for user: core Feb 13 19:51:34.860352 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 19:51:35.279109 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 
19:51:35.297940 ignition[926]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:51:35.297940 ignition[926]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:51:35.297940 ignition[926]: INFO : files: files passed Feb 13 19:51:35.297940 ignition[926]: INFO : Ignition finished successfully Feb 13 19:51:35.281965 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:51:35.304207 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:51:35.349915 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:51:35.381418 initrd-setup-root-after-ignition[953]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:35.381418 initrd-setup-root-after-ignition[953]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:35.440032 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:35.401452 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:51:35.401586 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:51:35.426357 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:51:35.452392 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:51:35.481000 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:51:35.545498 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:51:35.545686 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:51:35.564767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:51:35.584983 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:51:35.602054 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:51:35.608964 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:51:35.666949 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:51:35.693036 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:51:35.735371 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:35.747280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:35.757349 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:51:35.777346 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:51:35.777547 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:51:35.810342 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:51:35.821293 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:51:35.838349 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:51:35.854357 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:51:35.892098 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:51:35.892517 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
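For orientation, here is the kind of Ignition v3 config that could drive the files stage reported above. The paths, the core user, and the sysext-bakery download URL are taken from the log; the schema version, file mode, placeholder contents and SSH key are assumptions, and the instance's real user-data is not shown anywhere in this log.

    import json

    # Hypothetical config sketch; only the paths and URL come from the log.
    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"],
        }]},
        "storage": {
            "files": [
                {"path": "/home/core/install.sh", "mode": 0o755,
                 "contents": {"source": "data:,"}},   # placeholder body
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,"}},   # placeholder body
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/"
                                        "releases/download/latest/kubernetes-v1.30.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw", "hard": False,
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"},
            ],
        },
    }
    print(json.dumps(config, indent=2))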
Feb 13 19:51:35.910301 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:51:35.948127 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:51:35.948570 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:51:35.965331 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:51:35.996030 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:51:35.996390 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:51:36.023171 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:36.023571 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:36.041211 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:51:36.041383 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:36.079153 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:51:36.079368 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:51:36.110150 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:51:36.110395 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:51:36.131234 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:51:36.131437 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:51:36.158021 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:51:36.190052 ignition[978]: INFO : Ignition 2.20.0 Feb 13 19:51:36.190052 ignition[978]: INFO : Stage: umount Feb 13 19:51:36.190052 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:36.190052 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:36.190052 ignition[978]: INFO : umount: umount passed Feb 13 19:51:36.190052 ignition[978]: INFO : Ignition finished successfully Feb 13 19:51:36.205072 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:51:36.206142 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:51:36.206416 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:36.274202 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:51:36.274438 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:51:36.308668 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:51:36.309801 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:51:36.309926 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:51:36.325639 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:51:36.325786 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:51:36.347202 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:51:36.347335 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:51:36.366139 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:51:36.366222 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:51:36.383006 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:51:36.383099 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:51:36.401015 systemd[1]: ignition-fetch.service: Deactivated successfully. 
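Each Ignition report in this log ends with a "<stage>: <stage> passed" line followed by "Ignition finished successfully" (mount and files above, umount here). A small sketch for pulling those verdicts out of a saved journal dump like this one; the regex is keyed only to the exact strings that appear above:

    import re

    # Matches lines such as `ignition[978]: INFO : umount: umount passed`.
    STAGE_RE = re.compile(r"ignition\[\d+\]: INFO : (\w+): \1 passed")

    def ignition_stage_results(journal_text):
        return STAGE_RE.findall(journal_text)

    # e.g. ignition_stage_results(open("boot.log").read())
    # -> ['mount', 'files', 'umount', ...]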
Feb 13 19:51:36.401104 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:51:36.419036 systemd[1]: Stopped target network.target - Network. Feb 13 19:51:36.434925 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:51:36.435069 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:51:36.455030 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:51:36.474041 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:51:36.474161 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:36.492928 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:51:36.507933 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:51:36.523003 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:51:36.523094 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:51:36.541041 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:51:36.541128 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:51:36.558983 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:51:36.559093 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:51:36.577025 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:51:36.577128 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:51:36.595025 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:51:36.595157 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:51:36.615262 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:51:36.619842 systemd-networkd[746]: eth0: DHCPv6 lease lost Feb 13 19:51:36.636122 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:51:36.644635 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:51:36.644798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:51:36.662884 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:51:36.663152 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:51:36.684213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:51:36.684274 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:36.703901 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:51:36.716031 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:51:36.716159 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:51:36.736038 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:51:36.736127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:36.754027 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:51:36.754138 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:36.772990 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:51:36.773102 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:36.792153 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 19:51:36.816246 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:51:36.816497 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:36.829521 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:51:36.829615 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:36.869017 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:51:36.869095 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:37.255942 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 19:51:36.888993 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:51:36.889102 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:51:36.918943 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:51:36.919065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:51:36.945933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:51:36.946064 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:36.981970 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:51:37.019914 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:51:37.020155 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:37.041108 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:51:37.041211 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:37.051216 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:51:37.051293 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:37.072201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:37.072279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:37.108887 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:51:37.109031 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:51:37.118531 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:51:37.118651 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:51:37.136593 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:51:37.179004 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:51:37.204508 systemd[1]: Switching root. 
Feb 13 19:51:37.475916 systemd-journald[184]: Journal stopped Feb 13 19:51:29.090830 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025 Feb 13 19:51:29.090880 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:51:29.090897 kernel: BIOS-provided physical RAM map: Feb 13 19:51:29.090911 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Feb 13 19:51:29.090923 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Feb 13 19:51:29.090936 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Feb 13 19:51:29.090952 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Feb 13 19:51:29.090971 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Feb 13 19:51:29.090984 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd327fff] usable Feb 13 19:51:29.090999 kernel: BIOS-e820: [mem 0x00000000bd328000-0x00000000bd330fff] ACPI data Feb 13 19:51:29.091013 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable Feb 13 19:51:29.091026 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Feb 13 19:51:29.091041 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Feb 13 19:51:29.091053 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Feb 13 19:51:29.091073 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Feb 13 19:51:29.091087 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Feb 13 19:51:29.091103 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Feb 13 19:51:29.091117 kernel: NX (Execute Disable) protection: active Feb 13 19:51:29.091130 kernel: APIC: Static calls initialized Feb 13 19:51:29.091145 kernel: efi: EFI v2.7 by EDK II Feb 13 19:51:29.091161 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd328018 Feb 13 19:51:29.091178 kernel: random: crng init done Feb 13 19:51:29.091193 kernel: secureboot: Secure boot disabled Feb 13 19:51:29.091208 kernel: SMBIOS 2.4 present. 
Feb 13 19:51:29.091229 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024 Feb 13 19:51:29.091245 kernel: Hypervisor detected: KVM Feb 13 19:51:29.091262 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 19:51:29.091279 kernel: kvm-clock: using sched offset of 13451446801 cycles Feb 13 19:51:29.091296 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 19:51:29.091314 kernel: tsc: Detected 2299.998 MHz processor Feb 13 19:51:29.091331 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:51:29.091349 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:51:29.091366 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Feb 13 19:51:29.091387 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Feb 13 19:51:29.091405 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:51:29.091421 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Feb 13 19:51:29.091438 kernel: Using GB pages for direct mapping Feb 13 19:51:29.091455 kernel: ACPI: Early table checksum verification disabled Feb 13 19:51:29.091471 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Feb 13 19:51:29.091489 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Feb 13 19:51:29.091513 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 13 19:51:29.091535 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 13 19:51:29.091553 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 13 19:51:29.091570 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Feb 13 19:51:29.091589 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 13 19:51:29.091607 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Feb 13 19:51:29.091624 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 13 19:51:29.091645 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 13 19:51:29.091662 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 13 19:51:29.091691 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 13 19:51:29.091709 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 13 19:51:29.091726 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 13 19:51:29.093198 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 13 19:51:29.093219 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 13 19:51:29.093238 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 13 19:51:29.093256 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 13 19:51:29.093282 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 13 19:51:29.093300 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 13 19:51:29.093319 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 19:51:29.093337 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 19:51:29.093355 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 19:51:29.093374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Feb 13 19:51:29.093392 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Feb 13 19:51:29.093410 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 13 19:51:29.093428 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 13 19:51:29.093451 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Feb 13 19:51:29.093469 kernel: Zone ranges: Feb 13 19:51:29.093488 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:51:29.093505 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 19:51:29.093523 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 13 19:51:29.093541 kernel: Movable zone start for each node Feb 13 19:51:29.093560 kernel: Early memory node ranges Feb 13 19:51:29.093578 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 13 19:51:29.093596 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 13 19:51:29.093614 kernel: node 0: [mem 0x0000000000100000-0x00000000bd327fff] Feb 13 19:51:29.093636 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff] Feb 13 19:51:29.093655 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 13 19:51:29.093679 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 13 19:51:29.093698 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 13 19:51:29.093716 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:51:29.093748 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 13 19:51:29.093775 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 13 19:51:29.093791 kernel: On node 0, zone DMA32: 9 pages in unavailable ranges Feb 13 19:51:29.093807 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 19:51:29.093828 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Feb 13 19:51:29.093846 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 19:51:29.093864 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 19:51:29.093881 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 19:51:29.093899 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 19:51:29.093917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:51:29.093934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 19:51:29.093952 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 19:51:29.093970 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:51:29.093993 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 19:51:29.094011 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 19:51:29.094028 kernel: Booting paravirtualized kernel on KVM Feb 13 19:51:29.094048 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:51:29.094066 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 19:51:29.094084 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 19:51:29.094102 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 19:51:29.094120 kernel: pcpu-alloc: [0] 0 1 Feb 13 19:51:29.094138 kernel: kvm-guest: PV spinlocks enabled Feb 13 19:51:29.094159 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:51:29.094180 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:51:29.094199 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:51:29.094217 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 19:51:29.094236 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:51:29.094254 kernel: Fallback order for Node 0: 0 Feb 13 19:51:29.094272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932271 Feb 13 19:51:29.094290 kernel: Policy zone: Normal Feb 13 19:51:29.094313 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:51:29.094331 kernel: software IO TLB: area num 2. Feb 13 19:51:29.094351 kernel: Memory: 7513364K/7860548K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 346928K reserved, 0K cma-reserved) Feb 13 19:51:29.094368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 19:51:29.094387 kernel: Kernel/User page tables isolation: enabled Feb 13 19:51:29.094406 kernel: ftrace: allocating 37923 entries in 149 pages Feb 13 19:51:29.094424 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:51:29.094443 kernel: Dynamic Preempt: voluntary Feb 13 19:51:29.094479 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:51:29.094500 kernel: rcu: RCU event tracing is enabled. Feb 13 19:51:29.094520 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 19:51:29.094541 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:51:29.094564 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:51:29.094583 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:51:29.094602 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:51:29.094622 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 19:51:29.094658 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 19:51:29.094690 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:51:29.094708 kernel: Console: colour dummy device 80x25 Feb 13 19:51:29.094902 kernel: printk: console [ttyS0] enabled Feb 13 19:51:29.094927 kernel: ACPI: Core revision 20230628 Feb 13 19:51:29.094945 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:51:29.094964 kernel: x2apic enabled Feb 13 19:51:29.094983 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 19:51:29.095001 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 13 19:51:29.095029 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 19:51:29.095055 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Feb 13 19:51:29.095073 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 13 19:51:29.095094 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 13 19:51:29.095114 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:51:29.095132 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 19:51:29.095150 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 19:51:29.095168 kernel: Spectre V2 : Mitigation: IBRS Feb 13 19:51:29.095186 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:51:29.095207 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:51:29.095224 kernel: RETBleed: Mitigation: IBRS Feb 13 19:51:29.095243 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 19:51:29.095262 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Feb 13 19:51:29.095289 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 19:51:29.095310 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 19:51:29.095328 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 19:51:29.095347 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:51:29.095365 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:51:29.095388 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:51:29.095406 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:51:29.095425 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 19:51:29.095443 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:51:29.095462 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:51:29.095489 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:51:29.095509 kernel: landlock: Up and running. Feb 13 19:51:29.095529 kernel: SELinux: Initializing. Feb 13 19:51:29.095549 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:51:29.095572 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:51:29.095590 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 13 19:51:29.095608 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:51:29.095627 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:51:29.095646 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:51:29.095665 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 13 19:51:29.095722 kernel: signal: max sigframe size: 1776 Feb 13 19:51:29.095769 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:51:29.095789 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:51:29.095812 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 19:51:29.095830 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:51:29.095849 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:51:29.095867 kernel: .... node #0, CPUs: #1 Feb 13 19:51:29.095886 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 19:51:29.095913 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 19:51:29.095932 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:51:29.095951 kernel: smpboot: Max logical packages: 1 Feb 13 19:51:29.095974 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 13 19:51:29.095994 kernel: devtmpfs: initialized Feb 13 19:51:29.096013 kernel: x86/mm: Memory block size: 128MB Feb 13 19:51:29.096031 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 13 19:51:29.096050 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:51:29.096069 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 19:51:29.096087 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:51:29.096106 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:51:29.096125 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:51:29.096157 kernel: audit: type=2000 audit(1739476287.756:1): state=initialized audit_enabled=0 res=1 Feb 13 19:51:29.096176 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:51:29.096195 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:51:29.096215 kernel: cpuidle: using governor menu Feb 13 19:51:29.096235 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:51:29.096254 kernel: dca service started, version 1.12.1 Feb 13 19:51:29.096272 kernel: PCI: Using configuration type 1 for base access Feb 13 19:51:29.096290 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
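Two figures in the messages above can be cross-checked by hand: summing the usable ranges from the "Early memory node ranges" block gives exactly the 7860548K that the Memory: line reports, and the BogoMIPS values follow from the printed lpj. A quick sketch (the bogomips formula lpj/(500000/HZ) with HZ=1000 is an assumption about this kernel's configuration):

    # 1) Usable RAM: inclusive physical ranges copied from the "node 0:" lines.
    node0_ranges = [
        (0x0000000000001000, 0x0000000000054fff),
        (0x0000000000060000, 0x0000000000097fff),
        (0x0000000000100000, 0x00000000bd327fff),
        (0x00000000bd331000, 0x00000000bf8ecfff),
        (0x00000000bfbff000, 0x00000000bffdffff),
        (0x0000000100000000, 0x000000021fffffff),
    ]
    total_kib = sum(end + 1 - start for start, end in node0_ranges) // 1024
    print(total_kib)   # 7860548, matching "Memory: 7513364K/7860548K available"

    # 2) BogoMIPS: the kernel prints lpj/(500000/HZ) with truncation.
    lpj = 2299998      # from "(lpj=2299998)" in the calibration line above
    print(f"{lpj // 500}.{(lpj // 5) % 100:02d}")            # 4599.99 per CPU
    print(f"{2 * lpj // 500}.{(2 * lpj // 5) % 100:02d}")    # 9199.99 for 2 CPUs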
Feb 13 19:51:29.096308 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:51:29.096331 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:51:29.096373 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:51:29.096393 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:51:29.096412 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:51:29.096430 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:51:29.096450 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:51:29.096470 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:51:29.096490 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 19:51:29.096510 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:51:29.096534 kernel: ACPI: Interpreter enabled Feb 13 19:51:29.096553 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 19:51:29.096573 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:51:29.096593 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:51:29.096612 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 19:51:29.096631 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 19:51:29.096651 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:51:29.096957 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:51:29.097165 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 19:51:29.097386 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 19:51:29.097412 kernel: PCI host bridge to bus 0000:00 Feb 13 19:51:29.097589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:51:29.097799 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:51:29.097963 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:51:29.098122 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 13 19:51:29.098288 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:51:29.098484 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 19:51:29.098679 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 13 19:51:29.099233 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 19:51:29.099421 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 19:51:29.099608 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 13 19:51:29.099844 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 13 19:51:29.100026 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 13 19:51:29.100212 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 19:51:29.100392 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 13 19:51:29.100569 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 13 19:51:29.100777 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:51:29.100960 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 13 19:51:29.101147 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 13 19:51:29.101170 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 19:51:29.101189 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 19:51:29.101207 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:51:29.101226 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 19:51:29.101244 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 19:51:29.101262 kernel: iommu: Default domain type: Translated Feb 13 19:51:29.101281 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:51:29.101299 kernel: efivars: Registered efivars operations Feb 13 19:51:29.101322 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:51:29.101341 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:51:29.101359 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Feb 13 19:51:29.101377 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Feb 13 19:51:29.101395 kernel: e820: reserve RAM buffer [mem 0xbd328000-0xbfffffff] Feb 13 19:51:29.101413 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Feb 13 19:51:29.101431 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Feb 13 19:51:29.101449 kernel: vgaarb: loaded Feb 13 19:51:29.101467 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 19:51:29.101490 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:51:29.101508 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:51:29.101527 kernel: pnp: PnP ACPI init Feb 13 19:51:29.101545 kernel: pnp: PnP ACPI: found 7 devices Feb 13 19:51:29.101564 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:51:29.101582 kernel: NET: Registered PF_INET protocol family Feb 13 19:51:29.101601 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 19:51:29.101620 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 19:51:29.101639 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:51:29.102757 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:51:29.102787 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 19:51:29.102805 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 19:51:29.102824 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 19:51:29.102842 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 19:51:29.102859 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:51:29.102877 kernel: NET: Registered PF_XDP protocol family Feb 13 19:51:29.103085 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:51:29.103263 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:51:29.103442 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:51:29.103615 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Feb 13 19:51:29.104015 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 19:51:29.104048 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:51:29.104068 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 19:51:29.104086 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Feb 13 19:51:29.104109 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 19:51:29.104126 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 19:51:29.104145 kernel: clocksource: Switched to clocksource tsc Feb 
13 19:51:29.104164 kernel: Initialise system trusted keyrings Feb 13 19:51:29.104182 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 19:51:29.104201 kernel: Key type asymmetric registered Feb 13 19:51:29.104220 kernel: Asymmetric key parser 'x509' registered Feb 13 19:51:29.104238 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:51:29.104255 kernel: io scheduler mq-deadline registered Feb 13 19:51:29.104277 kernel: io scheduler kyber registered Feb 13 19:51:29.104295 kernel: io scheduler bfq registered Feb 13 19:51:29.104314 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:51:29.104335 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 19:51:29.104533 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 13 19:51:29.104557 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 13 19:51:29.104772 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 13 19:51:29.104798 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 19:51:29.104989 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 13 19:51:29.105021 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:51:29.105038 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:51:29.105056 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 19:51:29.105073 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 13 19:51:29.105092 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 13 19:51:29.105297 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 13 19:51:29.105327 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 19:51:29.105348 kernel: i8042: Warning: Keylock active Feb 13 19:51:29.105373 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 19:51:29.105392 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 19:51:29.105590 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 13 19:51:29.105802 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 19:51:29.105970 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:51:28 UTC (1739476288) Feb 13 19:51:29.106129 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 19:51:29.106152 kernel: intel_pstate: CPU model not supported Feb 13 19:51:29.106170 kernel: pstore: Using crash dump compression: deflate Feb 13 19:51:29.106194 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 19:51:29.106213 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:51:29.106231 kernel: Segment Routing with IPv6 Feb 13 19:51:29.106250 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:51:29.106268 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:51:29.106286 kernel: Key type dns_resolver registered Feb 13 19:51:29.106304 kernel: IPI shorthand broadcast: enabled Feb 13 19:51:29.106321 kernel: sched_clock: Marking stable (891004436, 171009140)->(1110276310, -48262734) Feb 13 19:51:29.106340 kernel: registered taskstats version 1 Feb 13 19:51:29.106362 kernel: Loading compiled-in X.509 certificates Feb 13 19:51:29.106380 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b' Feb 13 19:51:29.106398 kernel: Key type .fscrypt registered Feb 13 19:51:29.106416 kernel: Key type fscrypt-provisioning registered Feb 13 19:51:29.106433 kernel: ima: Allocated hash algorithm: 
sha1 Feb 13 19:51:29.106449 kernel: ima: No architecture policies found Feb 13 19:51:29.106467 kernel: clk: Disabling unused clocks Feb 13 19:51:29.106486 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 19:51:29.106504 kernel: Write protecting the kernel read-only data: 36864k Feb 13 19:51:29.106525 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 19:51:29.106543 kernel: Run /init as init process Feb 13 19:51:29.106559 kernel: with arguments: Feb 13 19:51:29.106577 kernel: /init Feb 13 19:51:29.106595 kernel: with environment: Feb 13 19:51:29.106610 kernel: HOME=/ Feb 13 19:51:29.106626 kernel: TERM=linux Feb 13 19:51:29.106643 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:51:29.106660 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Feb 13 19:51:29.106697 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:51:29.106719 systemd[1]: Detected virtualization google. Feb 13 19:51:29.106880 systemd[1]: Detected architecture x86-64. Feb 13 19:51:29.106900 systemd[1]: Running in initrd. Feb 13 19:51:29.106919 systemd[1]: No hostname configured, using default hostname. Feb 13 19:51:29.106937 systemd[1]: Hostname set to . Feb 13 19:51:29.106957 systemd[1]: Initializing machine ID from random generator. Feb 13 19:51:29.106982 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:51:29.107003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:29.107023 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:29.107044 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:51:29.107065 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:51:29.107084 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:51:29.107103 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:51:29.107132 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:51:29.107170 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:51:29.107195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:29.107215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:29.107237 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:51:29.107262 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:51:29.107283 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:51:29.107304 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:51:29.107325 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:51:29.107346 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:51:29.107368 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
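The device units the initrd is waiting for above, such as dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, are just escaped forms of paths like /dev/disk/by-label/EFI-SYSTEM. A rough re-implementation of that escaping, approximating what systemd-escape --path --suffix=device does (edge cases such as a leading dot are ignored, and unit_from_path is a made-up name):

    def unit_from_path(path, suffix="device"):
        # Strip slashes, map '/' to '-', and \xNN-escape anything that is not
        # alphanumeric, '_' or '.' -- which is how the names above are built.
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out) + "." + suffix

    print(unit_from_path("/dev/disk/by-label/EFI-SYSTEM"))
    # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as logged above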
Feb 13 19:51:29.107392 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:51:29.107413 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:29.107434 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:29.107458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:29.107479 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:51:29.107499 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:51:29.107520 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:51:29.107541 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:51:29.107561 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:51:29.107581 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:51:29.107603 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:51:29.107623 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:29.107648 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:51:29.107725 systemd-journald[184]: Collecting audit messages is disabled. Feb 13 19:51:29.107785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:29.107803 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:51:29.107828 systemd-journald[184]: Journal started Feb 13 19:51:29.107862 systemd-journald[184]: Runtime Journal (/run/log/journal/304ea9aa763d4cf1b2983c672e5133ef) is 8.0M, max 148.7M, 140.7M free. Feb 13 19:51:29.112169 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:51:29.112745 systemd-modules-load[185]: Inserted module 'overlay' Feb 13 19:51:29.125978 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:51:29.137960 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:51:29.139981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:29.163147 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:29.164828 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:51:29.167341 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:29.172090 kernel: Bridge firewalling registered Feb 13 19:51:29.173865 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 13 19:51:29.178041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:51:29.178662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:29.192278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:51:29.193091 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:29.216266 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:29.225399 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:29.230304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:51:29.240058 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:51:29.249265 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:51:29.270676 dracut-cmdline[217]: dracut-dracut-053 Feb 13 19:51:29.275443 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:51:29.309176 systemd-resolved[219]: Positive Trust Anchors: Feb 13 19:51:29.309198 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:51:29.309270 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:51:29.316651 systemd-resolved[219]: Defaulting to hostname 'linux'. Feb 13 19:51:29.318600 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:51:29.324586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:29.385778 kernel: SCSI subsystem initialized Feb 13 19:51:29.397761 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:51:29.409765 kernel: iscsi: registered transport (tcp) Feb 13 19:51:29.433888 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:51:29.433994 kernel: QLogic iSCSI HBA Driver Feb 13 19:51:29.488156 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:51:29.493011 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:51:29.536777 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:51:29.536877 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:51:29.536905 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:51:29.586769 kernel: raid6: avx2x4 gen() 17805 MB/s Feb 13 19:51:29.607770 kernel: raid6: avx2x2 gen() 17717 MB/s Feb 13 19:51:29.633919 kernel: raid6: avx2x1 gen() 13562 MB/s Feb 13 19:51:29.633998 kernel: raid6: using algorithm avx2x4 gen() 17805 MB/s Feb 13 19:51:29.660789 kernel: raid6: .... xor() 6709 MB/s, rmw enabled Feb 13 19:51:29.660869 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:51:29.690779 kernel: xor: automatically using best checksumming function avx Feb 13 19:51:29.872773 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:51:29.887088 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:51:29.902029 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:51:29.938162 systemd-udevd[402]: Using default interface naming scheme 'v255'. 
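dracut echoes the full kernel command line above; the mount.usr, verity.usrhash and root parameters in it are what later drive verity-setup and the /sysroot mounts. A small sketch that splits such a command line into a key/value map (parse_cmdline is a made-up helper, shown here on an abridged copy of the logged command line; repeated keys simply keep the last value):

    def parse_cmdline(cmdline):
        # Kernel command lines are space-separated key=value tokens; bare words
        # (no '=') are kept with a value of None.
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    p = parse_cmdline(
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr root=LABEL=ROOT "
        "flatcar.oem.id=gce "
        "verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3"
    )
    print(p["mount.usr"], p["flatcar.oem.id"])   # /dev/mapper/usr gce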
Feb 13 19:51:29.945396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:29.978019 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:51:30.017485 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Feb 13 19:51:30.056081 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:51:30.062194 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:51:30.160069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:30.197034 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:51:30.249766 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:51:30.270703 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:51:30.300909 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:51:30.287998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:30.324039 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:51:30.329765 kernel: AES CTR mode by8 optimization enabled Feb 13 19:51:30.329855 kernel: scsi host0: Virtio SCSI HBA Feb 13 19:51:30.356911 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:51:30.387793 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 13 19:51:30.401028 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:51:30.411168 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:51:30.411368 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:30.475921 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 13 19:51:30.522594 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 13 19:51:30.523511 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 13 19:51:30.523812 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 13 19:51:30.524049 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:51:30.524284 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:51:30.524313 kernel: GPT:17805311 != 25165823 Feb 13 19:51:30.524338 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:51:30.524363 kernel: GPT:17805311 != 25165823 Feb 13 19:51:30.524398 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:51:30.524429 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:30.524454 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 13 19:51:30.476066 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:30.476294 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:30.476522 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:30.592688 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (447) Feb 13 19:51:30.516456 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:30.614942 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (459) Feb 13 19:51:30.539304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
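The GPT warnings above are the usual sign of an image that was written for a smaller disk and then grown: the primary header expects the backup header at LBA 17805311, while the PersistentDisk actually ends at LBA 25165823. Converting both figures with the 512-byte sector size from the sd 0:0:1:0 line makes the gap concrete:

    SECTOR = 512               # "512-byte logical blocks" per the sd 0:0:1:0 line

    old_last_lba = 17805311    # where the primary GPT header expects the backup
    new_last_lba = 25165823    # real last LBA of the 25165824-sector disk

    def gib(sectors):
        return sectors * SECTOR / 2**30

    print(round(gib(old_last_lba + 1), 2))   # ~8.49 GiB: size the GPT was built for
    print(round(gib(new_last_lba + 1), 2))   # 12.0 GiB: actual disk size, matching
                                             # "(12.9 GB/12.0 GiB)" above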
Feb 13 19:51:30.560754 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:51:30.639005 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Feb 13 19:51:30.653437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:30.677707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Feb 13 19:51:30.705255 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Feb 13 19:51:30.705510 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Feb 13 19:51:30.757248 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 19:51:30.762033 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:51:30.794004 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:51:30.819200 disk-uuid[540]: Primary Header is updated. Feb 13 19:51:30.819200 disk-uuid[540]: Secondary Entries is updated. Feb 13 19:51:30.819200 disk-uuid[540]: Secondary Header is updated. Feb 13 19:51:30.837759 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:30.856517 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:30.882990 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:31.883349 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:51:31.883439 disk-uuid[541]: The operation has completed successfully. Feb 13 19:51:31.965037 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:51:31.965193 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:51:31.982974 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:51:32.025261 sh[563]: Success Feb 13 19:51:32.049771 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 19:51:32.143043 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:51:32.150978 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:51:32.174368 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:51:32.230193 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 19:51:32.230287 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:51:32.230313 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:51:32.239621 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:51:32.252246 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:51:32.281796 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:51:32.291676 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:51:32.292698 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:51:32.299995 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Feb 13 19:51:32.362935 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:32.362978 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:51:32.363003 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:51:32.353206 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:51:32.387919 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:51:32.387964 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:51:32.405818 kernel: BTRFS info (device sda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:32.423585 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:51:32.441068 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:51:32.513938 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:51:32.520172 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:51:32.635689 systemd-networkd[746]: lo: Link UP Feb 13 19:51:32.636177 systemd-networkd[746]: lo: Gained carrier Feb 13 19:51:32.638660 systemd-networkd[746]: Enumeration completed Feb 13 19:51:32.649360 ignition[690]: Ignition 2.20.0 Feb 13 19:51:32.638865 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:51:32.649370 ignition[690]: Stage: fetch-offline Feb 13 19:51:32.640283 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:32.649417 ignition[690]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:32.640290 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:51:32.649428 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:32.642627 systemd-networkd[746]: eth0: Link UP Feb 13 19:51:32.649565 ignition[690]: parsed url from cmdline: "" Feb 13 19:51:32.642634 systemd-networkd[746]: eth0: Gained carrier Feb 13 19:51:32.649572 ignition[690]: no config URL provided Feb 13 19:51:32.642649 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:32.649578 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:51:32.652877 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.69/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 19:51:32.649588 ignition[690]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:51:32.661314 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:51:32.649597 ignition[690]: failed to fetch config: resource requires networking Feb 13 19:51:32.679658 systemd[1]: Reached target network.target - Network. Feb 13 19:51:32.650018 ignition[690]: Ignition finished successfully Feb 13 19:51:32.699034 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
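The fetch-offline stage above finds no baked-in configuration and no URL on the command line, so it hands off to the networked fetch stage once eth0 has its DHCP lease. A rough sketch of the same lookup order, using only the paths the log itself reports (an illustration, not Ignition's actual code):

    import os

    # Locations the fetch-offline stage reports checking, in the order the
    # log shows them.
    candidates = [
        "/usr/lib/ignition/base.d",               # base config snippets
        "/usr/lib/ignition/base.platform.d/gcp",  # platform-specific base config
        "/usr/lib/ignition/user.ign",             # baked-in user config
    ]

    for path in candidates:
        if os.path.exists(path):
            print(f"offline config source found: {path}")
            break
    else:
        # Nothing on disk and no URL on the command line: the real config
        # lives in instance metadata, hence "resource requires networking"
        # and the hand-off to ignition-fetch.
        print("no offline config; deferring to the networked fetch stage")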
Feb 13 19:51:32.746254 ignition[757]: Ignition 2.20.0 Feb 13 19:51:32.755716 unknown[757]: fetched base config from "system" Feb 13 19:51:32.746263 ignition[757]: Stage: fetch Feb 13 19:51:32.755750 unknown[757]: fetched base config from "system" Feb 13 19:51:32.746465 ignition[757]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:32.755761 unknown[757]: fetched user config from "gcp" Feb 13 19:51:32.746477 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:32.758160 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:51:32.746606 ignition[757]: parsed url from cmdline: "" Feb 13 19:51:32.783026 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:51:32.746613 ignition[757]: no config URL provided Feb 13 19:51:32.828465 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:51:32.746620 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:51:32.864002 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:51:32.746631 ignition[757]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:51:32.895279 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:51:32.746658 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 13 19:51:32.913130 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:51:32.750502 ignition[757]: GET result: OK Feb 13 19:51:32.930945 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:51:32.750594 ignition[757]: parsing config with SHA512: 4fc3d15d7233892922069cb86f01166ff65485ce91d9a4cb562361105129e08428522fd031c82753650f9a821022c4f50ac66eb07309da8f9cdcf9b39077bfbd Feb 13 19:51:32.948957 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:51:32.756184 ignition[757]: fetch: fetch complete Feb 13 19:51:32.962969 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:51:32.756195 ignition[757]: fetch: fetch passed Feb 13 19:51:32.978950 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:51:32.756267 ignition[757]: Ignition finished successfully Feb 13 19:51:32.999110 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:51:32.816785 ignition[763]: Ignition 2.20.0 Feb 13 19:51:32.816797 ignition[763]: Stage: kargs Feb 13 19:51:32.817205 ignition[763]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:32.817224 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:32.818044 ignition[763]: kargs: kargs passed Feb 13 19:51:32.818101 ignition[763]: Ignition finished successfully Feb 13 19:51:32.883905 ignition[769]: Ignition 2.20.0 Feb 13 19:51:32.883916 ignition[769]: Stage: disks Feb 13 19:51:32.884144 ignition[769]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:32.884156 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:32.885091 ignition[769]: disks: disks passed Feb 13 19:51:32.885145 ignition[769]: Ignition finished successfully Feb 13 19:51:33.060454 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 19:51:33.242921 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:51:33.247896 systemd[1]: Mounting sysroot.mount - /sysroot... 
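The fetch stage pulls the user-provided config from the GCE metadata server and logs a SHA512 of the raw payload before parsing it. A minimal sketch of an equivalent fetch, assuming the standard metadata endpoint and its required Metadata-Flavor header:

    import hashlib
    import urllib.request

    # Same endpoint the fetch stage logs above. The Metadata-Flavor header is
    # mandatory for the GCE metadata server; this only works on a GCE instance
    # (and returns 404 if no user-data attribute is set).
    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")

    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        config = resp.read()

    # Ignition logs the SHA512 of the raw payload before parsing it, which is
    # a convenient way to confirm which config a node actually booted with.
    print(hashlib.sha512(config).hexdigest())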
Feb 13 19:51:33.402778 kernel: EXT4-fs (sda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none. Feb 13 19:51:33.404206 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:51:33.405126 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:51:33.428055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:51:33.459903 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:51:33.503506 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (785) Feb 13 19:51:33.503582 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:33.503620 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:51:33.503644 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:51:33.460688 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:51:33.460805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:51:33.561058 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:51:33.561113 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:51:33.460853 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:51:33.534184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:51:33.570585 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:51:33.593011 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:51:33.736555 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:51:33.747079 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:51:33.758266 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:51:33.767953 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:51:33.921773 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:51:33.926899 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:51:33.964793 kernel: BTRFS info (device sda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:33.976029 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:51:33.986182 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:51:34.014916 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:51:34.026583 ignition[897]: INFO : Ignition 2.20.0 Feb 13 19:51:34.026583 ignition[897]: INFO : Stage: mount Feb 13 19:51:34.052902 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:34.052902 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:34.052902 ignition[897]: INFO : mount: mount passed Feb 13 19:51:34.052902 ignition[897]: INFO : Ignition finished successfully Feb 13 19:51:34.032362 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:51:34.044897 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:51:34.417105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
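The ROOT fsck summary reported just above ("14/1628000 files, 120691/1617920 blocks") works out to very light usage; a quick calculation, assuming the common 4 KiB ext4 block size since the message does not state it:

    # Figures from the ROOT fsck summary; block size is assumed, not logged.
    inodes_used, inodes_total = 14, 1628000
    blocks_used, blocks_total = 120691, 1617920
    BLOCK = 4096

    print(f"inodes: {inodes_used}/{inodes_total} "
          f"({100 * inodes_used / inodes_total:.4f}% used)")
    print(f"blocks: {blocks_used}/{blocks_total} "
          f"({100 * blocks_used / blocks_total:.1f}% used, "
          f"about {blocks_used * BLOCK / 2**20:.0f} MiB of "
          f"{blocks_total * BLOCK / 2**30:.1f} GiB)")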
Feb 13 19:51:34.451493 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (909) Feb 13 19:51:34.451537 kernel: BTRFS info (device sda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:51:34.451563 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:51:34.423060 systemd-networkd[746]: eth0: Gained IPv6LL Feb 13 19:51:34.484085 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:51:34.484123 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:51:34.484139 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:51:34.481180 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:51:34.517485 ignition[926]: INFO : Ignition 2.20.0 Feb 13 19:51:34.517485 ignition[926]: INFO : Stage: files Feb 13 19:51:34.531996 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:34.531996 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:34.531996 ignition[926]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:51:34.531996 ignition[926]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:51:34.531996 ignition[926]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:51:34.531996 ignition[926]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:51:34.531996 ignition[926]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:51:34.531996 ignition[926]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:51:34.531996 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 19:51:34.526264 unknown[926]: wrote ssh authorized keys file for user: core Feb 13 19:51:34.860352 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 19:51:35.279109 ignition[926]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 
19:51:35.297940 ignition[926]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:51:35.297940 ignition[926]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:51:35.297940 ignition[926]: INFO : files: files passed Feb 13 19:51:35.297940 ignition[926]: INFO : Ignition finished successfully Feb 13 19:51:35.281965 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:51:35.304207 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:51:35.349915 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:51:35.381418 initrd-setup-root-after-ignition[953]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:35.381418 initrd-setup-root-after-ignition[953]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:35.440032 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:51:35.401452 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:51:35.401586 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:51:35.426357 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:51:35.452392 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:51:35.481000 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:51:35.545498 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:51:35.545686 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:51:35.564767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:51:35.584983 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:51:35.602054 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:51:35.608964 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:51:35.666949 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:51:35.693036 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:51:35.735371 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:51:35.747280 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:35.757349 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:51:35.777346 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:51:35.777547 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:51:35.810342 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:51:35.821293 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:51:35.838349 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:51:35.854357 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:51:35.892098 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:51:35.892517 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
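The files stage above boils down to two filesystem operations: download the kubernetes sysext image into /opt/extensions and symlink it under /etc/extensions so systemd-sysext will merge it later in the boot. A rough Python equivalent against a scratch directory instead of the real /sysroot (illustrative only, not Ignition itself):

    import os
    import urllib.request

    # Mirrors ops (5) and (6) from the files stage above.
    SYSROOT = "/tmp/sysroot-demo"
    RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
               "latest/kubernetes-v1.30.1-x86-64.raw")
    raw_path = os.path.join(
        SYSROOT, "opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw")
    link_path = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")

    os.makedirs(os.path.dirname(raw_path), exist_ok=True)
    os.makedirs(os.path.dirname(link_path), exist_ok=True)

    # op(6): fetch and write the sysext image
    urllib.request.urlretrieve(RAW_URL, raw_path)
    # op(5): the symlink target is absolute because it is resolved inside the
    # future root, not inside this scratch directory
    os.symlink("/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
               link_path)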
Feb 13 19:51:35.910301 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:51:35.948127 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:51:35.948570 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:51:35.965331 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:51:35.996030 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:51:35.996390 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:51:36.023171 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:36.023571 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:36.041211 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:51:36.041383 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:36.079153 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:51:36.079368 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:51:36.110150 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:51:36.110395 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:51:36.131234 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:51:36.131437 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:51:36.158021 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:51:36.190052 ignition[978]: INFO : Ignition 2.20.0 Feb 13 19:51:36.190052 ignition[978]: INFO : Stage: umount Feb 13 19:51:36.190052 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:51:36.190052 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:51:36.190052 ignition[978]: INFO : umount: umount passed Feb 13 19:51:36.190052 ignition[978]: INFO : Ignition finished successfully Feb 13 19:51:36.205072 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:51:36.206142 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:51:36.206416 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:36.274202 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:51:36.274438 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:51:36.308668 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:51:36.309801 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:51:36.309926 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:51:36.325639 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:51:36.325786 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:51:36.347202 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:51:36.347335 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:51:36.366139 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:51:36.366222 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:51:36.383006 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:51:36.383099 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:51:36.401015 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Feb 13 19:51:36.401104 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:51:36.419036 systemd[1]: Stopped target network.target - Network. Feb 13 19:51:36.434925 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:51:36.435069 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:51:36.455030 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:51:36.474041 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:51:36.474161 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:36.492928 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:51:36.507933 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:51:36.523003 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:51:36.523094 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:51:36.541041 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:51:36.541128 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:51:36.558983 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:51:36.559093 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:51:36.577025 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:51:36.577128 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:51:36.595025 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:51:36.595157 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:51:36.615262 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:51:36.619842 systemd-networkd[746]: eth0: DHCPv6 lease lost Feb 13 19:51:36.636122 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:51:36.644635 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:51:36.644798 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:51:36.662884 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:51:36.663152 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:51:36.684213 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:51:36.684274 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:36.703901 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:51:36.716031 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:51:36.716159 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:51:36.736038 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:51:36.736127 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:36.754027 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:51:36.754138 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:36.772990 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:51:36.773102 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:36.792153 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 19:51:36.816246 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:51:36.816497 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:36.829521 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:51:36.829615 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:36.869017 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:51:36.869095 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:37.255942 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 19:51:36.888993 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:51:36.889102 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:51:36.918943 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:51:36.919065 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:51:36.945933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:51:36.946064 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:51:36.981970 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:51:37.019914 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:51:37.020155 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:37.041108 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:51:37.041211 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:37.051216 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:51:37.051293 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:37.072201 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:51:37.072279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:37.108887 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:51:37.109031 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:51:37.118531 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:51:37.118651 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:51:37.136593 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:51:37.179004 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:51:37.204508 systemd[1]: Switching root. 
Feb 13 19:51:37.475916 systemd-journald[184]: Journal stopped Feb 13 19:51:39.928170 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:51:39.928227 kernel: SELinux: policy capability open_perms=1 Feb 13 19:51:39.928249 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:51:39.928267 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:51:39.928284 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:51:39.928302 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:51:39.928323 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:51:39.928345 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:51:39.928364 kernel: audit: type=1403 audit(1739476297.803:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:51:39.928385 systemd[1]: Successfully loaded SELinux policy in 82.565ms. Feb 13 19:51:39.928408 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.468ms. Feb 13 19:51:39.928430 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:51:39.928451 systemd[1]: Detected virtualization google. Feb 13 19:51:39.928471 systemd[1]: Detected architecture x86-64. Feb 13 19:51:39.928496 systemd[1]: Detected first boot. Feb 13 19:51:39.928519 systemd[1]: Initializing machine ID from random generator. Feb 13 19:51:39.928541 zram_generator::config[1019]: No configuration found. Feb 13 19:51:39.928564 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:51:39.928584 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:51:39.928609 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:51:39.928631 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:51:39.928653 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:51:39.928676 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:51:39.928698 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:51:39.928720 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:51:39.928755 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:51:39.928781 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:51:39.928803 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:51:39.928824 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:51:39.928846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:51:39.928869 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:51:39.928890 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:51:39.928911 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:51:39.928940 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
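"Initializing machine ID from random generator" is the first-boot path: with no /etc/machine-id present, systemd draws 128 random bits and stores them as 32 lower-case hex characters, and the same value reappears a few lines further down as the journal directory name. A rough sketch of that format (systemd additionally nudges a few bits into a UUID-v4-style layout, omitted here):

    import secrets

    # 128 random bits rendered as 32 lower-case hex characters, the format of
    # /etc/machine-id.
    machine_id = secrets.token_hex(16)
    print(machine_id)  # this boot produced 73dcef4d1e8348d0bf37b1248240aeb6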
Feb 13 19:51:39.928966 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:51:39.928986 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:51:39.929006 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:51:39.929025 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:51:39.929044 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:51:39.929062 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:51:39.929092 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:51:39.929112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:51:39.929132 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:51:39.929158 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:51:39.929177 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:51:39.929198 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:51:39.929217 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:51:39.929240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:51:39.929263 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:51:39.929286 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:51:39.929316 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:51:39.929339 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:51:39.929360 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:51:39.929383 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:51:39.929404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:51:39.929433 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:51:39.929457 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:51:39.929481 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:51:39.929504 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:51:39.929850 systemd[1]: Reached target machines.target - Containers. Feb 13 19:51:39.929886 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:51:39.929908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:39.929929 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:51:39.929958 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:51:39.929980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:39.930081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:51:39.930104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:51:39.930127 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Feb 13 19:51:39.930146 kernel: ACPI: bus type drm_connector registered Feb 13 19:51:39.930170 kernel: fuse: init (API version 7.39) Feb 13 19:51:39.930203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:51:39.930230 kernel: loop: module loaded Feb 13 19:51:39.930252 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:51:39.930275 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:51:39.930299 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:51:39.930324 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:51:39.930348 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:51:39.930372 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:51:39.930396 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:51:39.930466 systemd-journald[1106]: Collecting audit messages is disabled. Feb 13 19:51:39.930519 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:51:39.930545 systemd-journald[1106]: Journal started Feb 13 19:51:39.930593 systemd-journald[1106]: Runtime Journal (/run/log/journal/73dcef4d1e8348d0bf37b1248240aeb6) is 8.0M, max 148.7M, 140.7M free. Feb 13 19:51:38.700107 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:51:38.721904 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 19:51:38.722510 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:51:39.956780 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:51:39.988771 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:51:40.011112 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:51:40.011235 systemd[1]: Stopped verity-setup.service. Feb 13 19:51:40.036828 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:51:40.046853 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:51:40.057380 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:51:40.068208 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:51:40.079185 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:51:40.090178 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:51:40.100188 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:51:40.110155 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:51:40.120292 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:51:40.132320 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:51:40.144390 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:51:40.144625 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:51:40.156333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:40.156571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:40.168348 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 19:51:40.168592 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:51:40.179348 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:40.179584 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:40.191336 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:51:40.191563 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:51:40.202345 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:51:40.202572 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:51:40.213335 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:51:40.223326 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:51:40.235360 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:51:40.247315 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:51:40.272207 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:51:40.290931 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:51:40.302507 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:51:40.313006 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:51:40.313272 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:51:40.324412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:51:40.341033 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:51:40.364933 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:51:40.375143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:40.380515 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:51:40.396454 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:51:40.408073 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:51:40.414580 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:51:40.426939 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:51:40.440319 systemd-journald[1106]: Time spent on flushing to /var/log/journal/73dcef4d1e8348d0bf37b1248240aeb6 is 78.934ms for 914 entries. Feb 13 19:51:40.440319 systemd-journald[1106]: System Journal (/var/log/journal/73dcef4d1e8348d0bf37b1248240aeb6) is 8.0M, max 584.8M, 576.8M free. Feb 13 19:51:40.557655 systemd-journald[1106]: Received client request to flush runtime journal. Feb 13 19:51:40.460799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:51:40.483498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:51:40.502493 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
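The journald housekeeping lines give a feel for flush throughput: 78.934 ms to move 914 runtime entries into the persistent journal. In per-entry terms:

    # Figures from the systemd-journald lines above.
    flush_ms, entries = 78.934, 914
    print(f"about {flush_ms / entries * 1000:.0f} µs per entry")
    print(f"about {entries / (flush_ms / 1000):.0f} entries/s during the flush")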
Feb 13 19:51:40.528018 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:51:40.555367 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:51:40.581913 kernel: loop0: detected capacity change from 0 to 140992 Feb 13 19:51:40.568224 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:51:40.585638 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:51:40.597505 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:51:40.609466 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:51:40.622477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:51:40.653967 systemd-tmpfiles[1138]: ACLs are not supported, ignoring. Feb 13 19:51:40.654007 systemd-tmpfiles[1138]: ACLs are not supported, ignoring. Feb 13 19:51:40.655477 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:51:40.671776 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:51:40.678297 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:51:40.689003 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:51:40.697361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:51:40.706883 kernel: loop1: detected capacity change from 0 to 210664 Feb 13 19:51:40.725993 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:51:40.737528 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:51:40.744289 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:51:40.821091 kernel: loop2: detected capacity change from 0 to 52056 Feb 13 19:51:40.860864 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:51:40.888114 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:51:40.909594 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 19:51:40.934762 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. Feb 13 19:51:40.934801 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. Feb 13 19:51:40.946137 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:51:41.011839 kernel: loop4: detected capacity change from 0 to 140992 Feb 13 19:51:41.071956 kernel: loop5: detected capacity change from 0 to 210664 Feb 13 19:51:41.113820 kernel: loop6: detected capacity change from 0 to 52056 Feb 13 19:51:41.154501 kernel: loop7: detected capacity change from 0 to 138184 Feb 13 19:51:41.200103 (sd-merge)[1164]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Feb 13 19:51:41.201056 (sd-merge)[1164]: Merged extensions into '/usr'. Feb 13 19:51:41.212496 systemd[1]: Reloading requested from client PID 1137 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:51:41.212787 systemd[1]: Reloading... Feb 13 19:51:41.374808 zram_generator::config[1186]: No configuration found. Feb 13 19:51:41.601831 ldconfig[1132]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
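The "(sd-merge)" lines are systemd-sysext at work: the extension images it finds are overlaid onto the otherwise read-only /usr (and /opt) via overlayfs, which is how containerd, docker, kubernetes and the GCE OEM payload become available. A small sketch that lists images in the commonly used search directories (an illustration of where the images live, not sysext itself):

    import pathlib

    # Commonly used systemd-sysext search directories; the kubernetes.raw
    # link written by Ignition earlier ends up in a place like these before
    # sd-merge overlays the images onto /usr and /opt.
    search_dirs = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in search_dirs:
        p = pathlib.Path(d)
        if not p.is_dir():
            continue
        for image in sorted(p.iterdir()):
            print(image)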
Feb 13 19:51:41.654149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:41.768003 systemd[1]: Reloading finished in 554 ms. Feb 13 19:51:41.798943 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:51:41.809655 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:51:41.835120 systemd[1]: Starting ensure-sysext.service... Feb 13 19:51:41.854911 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:51:41.872878 systemd[1]: Reloading requested from client PID 1230 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:51:41.872904 systemd[1]: Reloading... Feb 13 19:51:41.920619 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:51:41.923058 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:51:41.925103 systemd-tmpfiles[1231]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:51:41.925915 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Feb 13 19:51:41.926156 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Feb 13 19:51:41.936099 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:51:41.939015 systemd-tmpfiles[1231]: Skipping /boot Feb 13 19:51:41.982566 systemd-tmpfiles[1231]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:51:41.983992 systemd-tmpfiles[1231]: Skipping /boot Feb 13 19:51:42.055793 zram_generator::config[1260]: No configuration found. Feb 13 19:51:42.179439 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:42.244682 systemd[1]: Reloading finished in 371 ms. Feb 13 19:51:42.264755 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:51:42.282505 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:51:42.310228 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:51:42.327219 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:51:42.349901 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:51:42.370176 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:51:42.389232 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:51:42.397711 augenrules[1323]: No rules Feb 13 19:51:42.407249 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:51:42.424784 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:51:42.428698 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:51:42.439037 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:51:42.452523 systemd-udevd[1321]: Using default interface naming scheme 'v255'. 
Feb 13 19:51:42.459282 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:51:42.459777 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:42.468360 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:42.494700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:51:42.514005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:51:42.524025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:42.534320 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:51:42.555035 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:51:42.564868 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:51:42.570532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:51:42.584447 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:51:42.598192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:42.598898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:42.611175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:42.612103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:42.624129 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:51:42.624365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:51:42.634897 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:51:42.659922 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:51:42.689940 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:51:42.729859 systemd[1]: Finished ensure-sysext.service. Feb 13 19:51:42.744633 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:51:42.745537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:51:42.753023 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:51:42.762160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:51:42.772035 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:51:42.791002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:51:42.822619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:51:42.844349 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:51:42.863001 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:51:42.863333 augenrules[1370]: /sbin/augenrules: No change Feb 13 19:51:42.872067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:51:42.882238 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 19:51:42.893023 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:51:42.902955 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:51:42.903017 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:51:42.904253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:51:42.905956 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:51:42.917576 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:51:42.918612 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:51:42.919053 augenrules[1399]: No rules Feb 13 19:51:42.929890 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:51:42.930841 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:51:42.956763 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 13 19:51:42.969983 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 19:51:42.962547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:51:42.962814 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:51:42.983797 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Feb 13 19:51:42.986392 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:51:42.987817 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:51:42.995345 systemd-resolved[1318]: Positive Trust Anchors: Feb 13 19:51:42.995592 systemd-resolved[1318]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:51:42.995661 systemd-resolved[1318]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:51:43.003831 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:51:43.013765 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:51:43.021756 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Feb 13 19:51:43.027764 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 19:51:43.030316 systemd-resolved[1318]: Defaulting to hostname 'linux'. Feb 13 19:51:43.039480 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:51:43.039588 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:51:43.044905 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:51:43.055622 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:51:43.064922 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
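The "Positive Trust Anchors" line is systemd-resolved loading its built-in DNSSEC trust anchor for the root zone, the KSK-2017 DS record. Its fields, spelled out for reference:

    # The DS record fields from the resolved line above:
    root_trust_anchor = {
        "owner": ".",           # the DNS root zone
        "key_tag": 20326,       # identifies the root key-signing key (KSK-2017)
        "algorithm": 8,         # 8 = RSA/SHA-256
        "digest_type": 2,       # 2 = SHA-256 digest of the DNSKEY record
        "digest": "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d",
    }
    print(root_trust_anchor)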
Feb 13 19:51:43.098098 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1347) Feb 13 19:51:43.097025 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Feb 13 19:51:43.132955 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 19:51:43.157118 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:51:43.177407 systemd-networkd[1395]: lo: Link UP Feb 13 19:51:43.177414 systemd-networkd[1395]: lo: Gained carrier Feb 13 19:51:43.182916 systemd-networkd[1395]: Enumeration completed Feb 13 19:51:43.183093 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:51:43.184320 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:43.184431 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:51:43.185490 systemd-networkd[1395]: eth0: Link UP Feb 13 19:51:43.185626 systemd-networkd[1395]: eth0: Gained carrier Feb 13 19:51:43.185659 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:51:43.193074 systemd[1]: Reached target network.target - Network. Feb 13 19:51:43.195831 systemd-networkd[1395]: eth0: DHCPv4 address 10.128.0.69/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 19:51:43.208013 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:51:43.238976 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Feb 13 19:51:43.250756 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:51:43.256532 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:51:43.283959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:51:43.294446 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:51:43.312087 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:51:43.345025 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:51:43.380513 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:51:43.381673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:51:43.387043 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:51:43.402913 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:51:43.429341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:51:43.441403 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:51:43.454090 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:51:43.464134 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:51:43.476036 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:51:43.488200 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
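As in the initrd, the DHCP lease is a host route: a /32 address with a gateway that is not inside that prefix, which is typical of GCE VPC networking; networkd reaches the gateway via an on-link route. A two-line check with the values from the lease above:

    import ipaddress

    # Values from the DHCP lease above: a host route plus an off-prefix
    # gateway, which networkd reaches via an on-link route.
    addr = ipaddress.ip_interface("10.128.0.69/32")
    gateway = ipaddress.ip_address("10.128.0.1")

    print(addr.network)             # 10.128.0.69/32
    print(gateway in addr.network)  # False: the gateway is outside the /32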
Feb 13 19:51:43.498144 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:51:43.509999 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:51:43.520983 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:51:43.521060 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:51:43.529955 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:51:43.541799 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:51:43.553718 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:51:43.580808 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:51:43.591891 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:51:43.602134 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:51:43.611956 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:51:43.620996 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:51:43.621053 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:51:43.626980 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:51:43.646446 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:51:43.669119 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:51:43.690238 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:51:43.711507 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:51:43.718301 jq[1448]: false Feb 13 19:51:43.720942 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:51:43.728044 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:51:43.747039 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:51:43.763092 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:51:43.784047 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:51:43.814064 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:51:43.821836 coreos-metadata[1446]: Feb 13 19:51:43.820 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 13 19:51:43.825187 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 13 19:51:43.825953 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Feb 13 19:51:43.828337 coreos-metadata[1446]: Feb 13 19:51:43.828 INFO Fetch successful Feb 13 19:51:43.829275 coreos-metadata[1446]: Feb 13 19:51:43.828 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 13 19:51:43.830703 coreos-metadata[1446]: Feb 13 19:51:43.830 INFO Fetch successful Feb 13 19:51:43.830853 coreos-metadata[1446]: Feb 13 19:51:43.830 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 13 19:51:43.831114 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:51:43.833643 coreos-metadata[1446]: Feb 13 19:51:43.833 INFO Fetch successful Feb 13 19:51:43.833643 coreos-metadata[1446]: Feb 13 19:51:43.833 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 13 19:51:43.835871 coreos-metadata[1446]: Feb 13 19:51:43.835 INFO Fetch successful Feb 13 19:51:43.838428 extend-filesystems[1449]: Found loop4 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found loop5 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found loop6 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found loop7 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found sda Feb 13 19:51:43.845174 extend-filesystems[1449]: Found sda1 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found sda2 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found sda3 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found usr Feb 13 19:51:43.845174 extend-filesystems[1449]: Found sda4 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found sda6 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found sda7 Feb 13 19:51:43.845174 extend-filesystems[1449]: Found sda9 Feb 13 19:51:43.845174 extend-filesystems[1449]: Checking size of /dev/sda9 Feb 13 19:51:44.055307 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 13 19:51:44.055405 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 13 19:51:44.055451 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1341) Feb 13 19:51:44.055658 extend-filesystems[1449]: Resized partition /dev/sda9 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:07:00 UTC 2025 (1): Starting Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: ---------------------------------------------------- Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: corporation. 
Support and training for ntp-4 are Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: available at https://www.nwtime.org/support Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: ---------------------------------------------------- Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: proto: precision = 0.073 usec (-24) Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: basedate set to 2025-02-01 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: gps base set to 2025-02-02 (week 2352) Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: Listen normally on 3 eth0 10.128.0.69:123 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: Listen normally on 4 lo [::1]:123 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: bind(21) AF_INET6 fe80::4001:aff:fe80:45%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:45%2#123 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: failed to init interface for address fe80::4001:aff:fe80:45%2 Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: Listening on routing socket on fd #21 for interface updates Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:51:44.077932 ntpd[1453]: 13 Feb 19:51:43 ntpd[1453]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:51:43.859203 dbus-daemon[1447]: [system] SELinux support is enabled Feb 13 19:51:43.848142 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:51:44.091621 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:51:44.091621 extend-filesystems[1475]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 19:51:44.091621 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 13 19:51:44.091621 extend-filesystems[1475]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 13 19:51:43.863330 ntpd[1453]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:07:00 UTC 2025 (1): Starting Feb 13 19:51:43.882327 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:51:44.137382 update_engine[1465]: I20250213 19:51:44.019533 1465 main.cc:92] Flatcar Update Engine starting Feb 13 19:51:44.137382 update_engine[1465]: I20250213 19:51:44.057209 1465 update_check_scheduler.cc:74] Next update check in 6m8s Feb 13 19:51:44.144580 extend-filesystems[1449]: Resized filesystem in /dev/sda9 Feb 13 19:51:43.863364 ntpd[1453]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:51:43.932316 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:51:44.152125 jq[1467]: true Feb 13 19:51:43.863382 ntpd[1453]: ---------------------------------------------------- Feb 13 19:51:43.932648 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:51:43.863397 ntpd[1453]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:51:43.933233 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 19:51:43.863410 ntpd[1453]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:51:43.933488 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:51:44.154964 jq[1481]: true Feb 13 19:51:43.863423 ntpd[1453]: corporation. Support and training for ntp-4 are Feb 13 19:51:43.980528 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:51:43.863437 ntpd[1453]: available at https://www.nwtime.org/support Feb 13 19:51:43.981890 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:51:43.863452 ntpd[1453]: ---------------------------------------------------- Feb 13 19:51:43.992580 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:51:43.868673 dbus-daemon[1447]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1395 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:51:43.994030 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:51:43.871664 ntpd[1453]: proto: precision = 0.073 usec (-24) Feb 13 19:51:44.012714 systemd-logind[1463]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 19:51:43.875201 ntpd[1453]: basedate set to 2025-02-01 Feb 13 19:51:44.016501 systemd-logind[1463]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 13 19:51:43.875229 ntpd[1453]: gps base set to 2025-02-02 (week 2352) Feb 13 19:51:44.016558 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:51:43.885849 ntpd[1453]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:51:44.019050 systemd-logind[1463]: New seat seat0. Feb 13 19:51:43.885927 ntpd[1453]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:51:44.031987 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:51:43.886973 ntpd[1453]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:51:44.076394 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:51:43.887041 ntpd[1453]: Listen normally on 3 eth0 10.128.0.69:123 Feb 13 19:51:44.151571 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:51:43.887101 ntpd[1453]: Listen normally on 4 lo [::1]:123 Feb 13 19:51:43.887166 ntpd[1453]: bind(21) AF_INET6 fe80::4001:aff:fe80:45%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:51:43.887196 ntpd[1453]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:45%2#123 Feb 13 19:51:43.887216 ntpd[1453]: failed to init interface for address fe80::4001:aff:fe80:45%2 Feb 13 19:51:43.887264 ntpd[1453]: Listening on routing socket on fd #21 for interface updates Feb 13 19:51:43.898221 ntpd[1453]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:51:43.898263 ntpd[1453]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:51:44.097590 dbus-daemon[1447]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:51:44.174686 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:51:44.190215 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
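The extend-filesystems entries above record an online ext4 grow of /dev/sda9 from 1617920 to 2538491 4k blocks while the filesystem stays mounted on /. Done by hand it would amount to roughly the following sketch (device name taken from the log; the service automates exactly this):

    # ext4 can be grown while mounted; resize2fs expands it to fill the enlarged partition
    resize2fs /dev/sda9
    # confirm the new size is visible
    df -h /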
Feb 13 19:51:44.190983 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:51:44.191610 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:51:44.217926 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:51:44.225647 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:51:44.225974 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:51:44.251204 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:51:44.300390 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:51:44.301396 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:51:44.327869 systemd[1]: Starting sshkeys.service... Feb 13 19:51:44.392317 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:51:44.413030 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:51:44.528779 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:51:44.533995 locksmithd[1508]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:51:44.538384 dbus-daemon[1447]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:51:44.539001 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:51:44.549888 dbus-daemon[1447]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1501 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:51:44.561262 systemd[1]: Starting polkit.service - Authorization Manager... 
Feb 13 19:51:44.564717 coreos-metadata[1516]: Feb 13 19:51:44.564 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 13 19:51:44.568296 coreos-metadata[1516]: Feb 13 19:51:44.568 INFO Fetch failed with 404: resource not found Feb 13 19:51:44.568296 coreos-metadata[1516]: Feb 13 19:51:44.568 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 13 19:51:44.569702 coreos-metadata[1516]: Feb 13 19:51:44.569 INFO Fetch successful Feb 13 19:51:44.569702 coreos-metadata[1516]: Feb 13 19:51:44.569 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 13 19:51:44.571008 coreos-metadata[1516]: Feb 13 19:51:44.570 INFO Fetch failed with 404: resource not found Feb 13 19:51:44.571008 coreos-metadata[1516]: Feb 13 19:51:44.570 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 13 19:51:44.572773 coreos-metadata[1516]: Feb 13 19:51:44.572 INFO Fetch failed with 404: resource not found Feb 13 19:51:44.572773 coreos-metadata[1516]: Feb 13 19:51:44.572 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 13 19:51:44.576761 coreos-metadata[1516]: Feb 13 19:51:44.575 INFO Fetch successful Feb 13 19:51:44.581465 unknown[1516]: wrote ssh authorized keys file for user: core Feb 13 19:51:44.611824 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:51:44.629629 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:51:44.631551 polkitd[1532]: Started polkitd version 121 Feb 13 19:51:44.644339 polkitd[1532]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:51:44.644653 polkitd[1532]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:51:44.645042 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:51:44.645724 polkitd[1532]: Finished loading, compiling and executing 2 rules Feb 13 19:51:44.650273 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:51:44.650589 dbus-daemon[1447]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:51:44.655215 polkitd[1532]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:51:44.664901 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:51:44.665229 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:51:44.677298 systemd[1]: Finished sshkeys.service. Feb 13 19:51:44.687667 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:51:44.708721 systemd-hostnamed[1501]: Hostname set to (transient) Feb 13 19:51:44.709222 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:51:44.710196 systemd-resolved[1318]: System hostname changed to 'ci-4152-2-1-f15ac478b05ae8fff206.c.flatcar-212911.internal'. Feb 13 19:51:44.724874 containerd[1482]: time="2025-02-13T19:51:44.724561256Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:51:44.746764 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:51:44.763246 containerd[1482]: time="2025-02-13T19:51:44.762964566Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:51:44.765914 containerd[1482]: time="2025-02-13T19:51:44.765671508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:44.766319 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.765726215Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.766486894Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.766711158Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.767141513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.767272260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.767294938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.767566804Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.767591833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.767614128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.767630742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.767879516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:44.768512 containerd[1482]: time="2025-02-13T19:51:44.768237327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:51:44.769121 containerd[1482]: time="2025-02-13T19:51:44.768447499Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:51:44.769121 containerd[1482]: time="2025-02-13T19:51:44.768472618Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Feb 13 19:51:44.769121 containerd[1482]: time="2025-02-13T19:51:44.768590976Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:51:44.769121 containerd[1482]: time="2025-02-13T19:51:44.768658550Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:51:44.774489 containerd[1482]: time="2025-02-13T19:51:44.774448149Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.774650408Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.774765221Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.774797120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.774821434Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775033644Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775403753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775604175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775633405Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775670919Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775694745Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775717238Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775763840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775785688Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:51:44.776757 containerd[1482]: time="2025-02-13T19:51:44.775807702Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.775829739Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.775854874Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.775874981Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.775905023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.775927030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.775946902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.775966851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.775987514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.776006942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.776026336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.776065633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.776089157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.776113347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777399 containerd[1482]: time="2025-02-13T19:51:44.776133519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776154178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776174082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776196132Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776231101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776253411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776272026Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776345741Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776379607Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776397763Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776419557Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776436398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776455244Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776472305Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:51:44.777988 containerd[1482]: time="2025-02-13T19:51:44.776491025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:51:44.778978 containerd[1482]: time="2025-02-13T19:51:44.778880613Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:51:44.779301 containerd[1482]: time="2025-02-13T19:51:44.779275800Z" level=info msg="Connect containerd service" Feb 13 19:51:44.779479 containerd[1482]: time="2025-02-13T19:51:44.779455349Z" level=info msg="using legacy CRI server" Feb 13 19:51:44.779579 containerd[1482]: time="2025-02-13T19:51:44.779559575Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:51:44.779868 containerd[1482]: time="2025-02-13T19:51:44.779842405Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:51:44.780877 containerd[1482]: time="2025-02-13T19:51:44.780840586Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:51:44.781138 containerd[1482]: time="2025-02-13T19:51:44.781074722Z" level=info msg="Start subscribing containerd event" Feb 13 19:51:44.781207 containerd[1482]: time="2025-02-13T19:51:44.781161924Z" level=info msg="Start recovering state" Feb 13 19:51:44.781271 containerd[1482]: time="2025-02-13T19:51:44.781253541Z" level=info msg="Start event monitor" Feb 13 19:51:44.781319 containerd[1482]: time="2025-02-13T19:51:44.781274399Z" level=info msg="Start snapshots syncer" Feb 13 19:51:44.781319 containerd[1482]: time="2025-02-13T19:51:44.781289572Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:51:44.781319 containerd[1482]: time="2025-02-13T19:51:44.781302423Z" level=info msg="Start streaming server" Feb 13 19:51:44.781822 containerd[1482]: time="2025-02-13T19:51:44.781789293Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:51:44.783758 containerd[1482]: time="2025-02-13T19:51:44.781939315Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:51:44.783758 containerd[1482]: time="2025-02-13T19:51:44.782055997Z" level=info msg="containerd successfully booted in 0.058551s" Feb 13 19:51:44.786296 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:51:44.797243 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:51:44.806367 systemd[1]: Started containerd.service - containerd container runtime. 
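containerd is now up with the CRI plugin settings dumped above (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8). The same effective configuration can be read back from a running node with crictl, assuming crictl is installed and pointed at the containerd socket, neither of which is shown in this log:

    # dump the CRI runtime status and configuration as JSON
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info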
Feb 13 19:51:44.864006 ntpd[1453]: bind(24) AF_INET6 fe80::4001:aff:fe80:45%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:51:44.864476 ntpd[1453]: 13 Feb 19:51:44 ntpd[1453]: bind(24) AF_INET6 fe80::4001:aff:fe80:45%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:51:44.864476 ntpd[1453]: 13 Feb 19:51:44 ntpd[1453]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:45%2#123 Feb 13 19:51:44.864476 ntpd[1453]: 13 Feb 19:51:44 ntpd[1453]: failed to init interface for address fe80::4001:aff:fe80:45%2 Feb 13 19:51:44.864070 ntpd[1453]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:45%2#123 Feb 13 19:51:44.864091 ntpd[1453]: failed to init interface for address fe80::4001:aff:fe80:45%2 Feb 13 19:51:44.983025 systemd-networkd[1395]: eth0: Gained IPv6LL Feb 13 19:51:44.987059 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:51:44.999854 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:51:45.018090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:45.038219 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:51:45.056888 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Feb 13 19:51:45.063851 init.sh[1563]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 13 19:51:45.067785 init.sh[1563]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 13 19:51:45.067785 init.sh[1563]: + /usr/bin/google_instance_setup Feb 13 19:51:45.076274 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:51:45.612217 instance-setup[1569]: INFO Running google_set_multiqueue. Feb 13 19:51:45.633895 instance-setup[1569]: INFO Set channels for eth0 to 2. Feb 13 19:51:45.638568 instance-setup[1569]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 13 19:51:45.640487 instance-setup[1569]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 13 19:51:45.641051 instance-setup[1569]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 13 19:51:45.643300 instance-setup[1569]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 13 19:51:45.643376 instance-setup[1569]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 13 19:51:45.645193 instance-setup[1569]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 13 19:51:45.645507 instance-setup[1569]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 13 19:51:45.648384 instance-setup[1569]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 13 19:51:45.656913 instance-setup[1569]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 19:51:45.661418 instance-setup[1569]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 19:51:45.663854 instance-setup[1569]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 19:51:45.663910 instance-setup[1569]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 19:51:45.687971 init.sh[1563]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 19:51:45.852874 startup-script[1602]: INFO Starting startup scripts. Feb 13 19:51:45.859223 startup-script[1602]: INFO No startup scripts found in metadata. Feb 13 19:51:45.859325 startup-script[1602]: INFO Finished running startup scripts. 
Feb 13 19:51:45.884879 init.sh[1563]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 19:51:45.884879 init.sh[1563]: + daemon_pids=() Feb 13 19:51:45.884879 init.sh[1563]: + for d in accounts clock_skew network Feb 13 19:51:45.885091 init.sh[1563]: + daemon_pids+=($!) Feb 13 19:51:45.885091 init.sh[1563]: + for d in accounts clock_skew network Feb 13 19:51:45.885622 init.sh[1563]: + daemon_pids+=($!) Feb 13 19:51:45.885622 init.sh[1563]: + for d in accounts clock_skew network Feb 13 19:51:45.885818 init.sh[1605]: + /usr/bin/google_accounts_daemon Feb 13 19:51:45.886555 init.sh[1563]: + daemon_pids+=($!) Feb 13 19:51:45.886555 init.sh[1563]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 19:51:45.886555 init.sh[1563]: + /usr/bin/systemd-notify --ready Feb 13 19:51:45.886688 init.sh[1606]: + /usr/bin/google_clock_skew_daemon Feb 13 19:51:45.887378 init.sh[1607]: + /usr/bin/google_network_daemon Feb 13 19:51:45.909386 systemd[1]: Started oem-gce.service - GCE Linux Agent. Feb 13 19:51:45.922382 init.sh[1563]: + wait -n 1605 1606 1607 Feb 13 19:51:46.244785 google-networking[1607]: INFO Starting Google Networking daemon. Feb 13 19:51:46.256320 google-clock-skew[1606]: INFO Starting Google Clock Skew daemon. Feb 13 19:51:46.272934 google-clock-skew[1606]: INFO Clock drift token has changed: 0. Feb 13 19:51:46.353023 groupadd[1617]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 19:51:46.356996 groupadd[1617]: group added to /etc/gshadow: name=google-sudoers Feb 13 19:51:46.413162 groupadd[1617]: new group: name=google-sudoers, GID=1000 Feb 13 19:51:46.447712 google-accounts[1605]: INFO Starting Google Accounts daemon. Feb 13 19:51:46.461231 google-accounts[1605]: WARNING OS Login not installed. Feb 13 19:51:46.463786 google-accounts[1605]: INFO Creating a new user account for 0. Feb 13 19:51:46.469783 init.sh[1625]: useradd: invalid user name '0': use --badname to ignore Feb 13 19:51:46.470601 google-accounts[1605]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 13 19:51:46.575680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:46.588003 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:51:46.598473 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:46.599858 systemd[1]: Startup finished in 1.061s (kernel) + 9.029s (initrd) + 8.876s (userspace) = 18.968s. Feb 13 19:51:47.000611 systemd-resolved[1318]: Clock change detected. Flushing caches. Feb 13 19:51:47.001772 google-clock-skew[1606]: INFO Synced system time with hardware clock. Feb 13 19:51:47.578591 kubelet[1632]: E0213 19:51:47.578520 1632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:47.581802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:47.582060 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:47.582541 systemd[1]: kubelet.service: Consumed 1.261s CPU time. 
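The kubelet exit above is the unit failing because /var/lib/kubelet/config.yaml does not exist yet, which on a node in this state typically just means no kubeadm init/join has run to write it. A minimal manual check of the same condition, as a sketch with the path taken from the error message:

    # the file the kubelet is complaining about
    test -f /var/lib/kubelet/config.yaml || echo "kubelet config not written yet"
    # the unit result systemd recorded
    systemctl status kubelet --no-pager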
Feb 13 19:51:47.911533 ntpd[1453]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:45%2]:123 Feb 13 19:51:47.912033 ntpd[1453]: 13 Feb 19:51:47 ntpd[1453]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:45%2]:123 Feb 13 19:51:51.860418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:51:51.865727 systemd[1]: Started sshd@0-10.128.0.69:22-139.178.89.65:56784.service - OpenSSH per-connection server daemon (139.178.89.65:56784). Feb 13 19:51:52.185856 sshd[1645]: Accepted publickey for core from 139.178.89.65 port 56784 ssh2: RSA SHA256:kqLFdnc4GUa+fVBnzdXd2t9fY8wvn6J3Jul4mo43txI Feb 13 19:51:52.187884 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:52.199297 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:51:52.204703 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:51:52.208434 systemd-logind[1463]: New session 1 of user core. Feb 13 19:51:52.228915 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:51:52.235830 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:51:52.259807 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:51:52.397031 systemd[1649]: Queued start job for default target default.target. Feb 13 19:51:52.403908 systemd[1649]: Created slice app.slice - User Application Slice. Feb 13 19:51:52.403963 systemd[1649]: Reached target paths.target - Paths. Feb 13 19:51:52.403991 systemd[1649]: Reached target timers.target - Timers. Feb 13 19:51:52.405884 systemd[1649]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:51:52.421531 systemd[1649]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:51:52.421739 systemd[1649]: Reached target sockets.target - Sockets. Feb 13 19:51:52.421765 systemd[1649]: Reached target basic.target - Basic System. Feb 13 19:51:52.421837 systemd[1649]: Reached target default.target - Main User Target. Feb 13 19:51:52.421898 systemd[1649]: Startup finished in 153ms. Feb 13 19:51:52.422156 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:51:52.431577 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:51:52.662762 systemd[1]: Started sshd@1-10.128.0.69:22-139.178.89.65:56790.service - OpenSSH per-connection server daemon (139.178.89.65:56790). Feb 13 19:51:52.957362 sshd[1660]: Accepted publickey for core from 139.178.89.65 port 56790 ssh2: RSA SHA256:kqLFdnc4GUa+fVBnzdXd2t9fY8wvn6J3Jul4mo43txI Feb 13 19:51:52.959250 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:52.965812 systemd-logind[1463]: New session 2 of user core. Feb 13 19:51:52.975606 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:51:53.171120 sshd[1662]: Connection closed by 139.178.89.65 port 56790 Feb 13 19:51:53.172011 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:53.176998 systemd[1]: sshd@1-10.128.0.69:22-139.178.89.65:56790.service: Deactivated successfully. Feb 13 19:51:53.179255 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:51:53.180258 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:51:53.181875 systemd-logind[1463]: Removed session 2. 
Feb 13 19:51:53.225713 systemd[1]: Started sshd@2-10.128.0.69:22-139.178.89.65:56796.service - OpenSSH per-connection server daemon (139.178.89.65:56796). Feb 13 19:51:53.526223 sshd[1667]: Accepted publickey for core from 139.178.89.65 port 56796 ssh2: RSA SHA256:kqLFdnc4GUa+fVBnzdXd2t9fY8wvn6J3Jul4mo43txI Feb 13 19:51:53.527979 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:53.534491 systemd-logind[1463]: New session 3 of user core. Feb 13 19:51:53.540598 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:51:53.732809 sshd[1669]: Connection closed by 139.178.89.65 port 56796 Feb 13 19:51:53.733731 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:53.738261 systemd[1]: sshd@2-10.128.0.69:22-139.178.89.65:56796.service: Deactivated successfully. Feb 13 19:51:53.740524 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:51:53.742424 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:51:53.744093 systemd-logind[1463]: Removed session 3. Feb 13 19:51:53.796742 systemd[1]: Started sshd@3-10.128.0.69:22-139.178.89.65:56804.service - OpenSSH per-connection server daemon (139.178.89.65:56804). Feb 13 19:51:54.093320 sshd[1674]: Accepted publickey for core from 139.178.89.65 port 56804 ssh2: RSA SHA256:kqLFdnc4GUa+fVBnzdXd2t9fY8wvn6J3Jul4mo43txI Feb 13 19:51:54.095092 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:54.100530 systemd-logind[1463]: New session 4 of user core. Feb 13 19:51:54.110583 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:51:54.308217 sshd[1676]: Connection closed by 139.178.89.65 port 56804 Feb 13 19:51:54.309164 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:54.314482 systemd[1]: sshd@3-10.128.0.69:22-139.178.89.65:56804.service: Deactivated successfully. Feb 13 19:51:54.316760 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:51:54.317808 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:51:54.319327 systemd-logind[1463]: Removed session 4. Feb 13 19:51:54.363726 systemd[1]: Started sshd@4-10.128.0.69:22-139.178.89.65:56808.service - OpenSSH per-connection server daemon (139.178.89.65:56808). Feb 13 19:51:54.667726 sshd[1681]: Accepted publickey for core from 139.178.89.65 port 56808 ssh2: RSA SHA256:kqLFdnc4GUa+fVBnzdXd2t9fY8wvn6J3Jul4mo43txI Feb 13 19:51:54.669559 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:54.676009 systemd-logind[1463]: New session 5 of user core. Feb 13 19:51:54.686591 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:51:54.862248 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:51:54.862818 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:54.879348 sudo[1684]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:54.922176 sshd[1683]: Connection closed by 139.178.89.65 port 56808 Feb 13 19:51:54.923921 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:54.928570 systemd[1]: sshd@4-10.128.0.69:22-139.178.89.65:56808.service: Deactivated successfully. Feb 13 19:51:54.930869 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:51:54.932861 systemd-logind[1463]: Session 5 logged out. 
Waiting for processes to exit. Feb 13 19:51:54.934537 systemd-logind[1463]: Removed session 5. Feb 13 19:51:54.979072 systemd[1]: Started sshd@5-10.128.0.69:22-139.178.89.65:49950.service - OpenSSH per-connection server daemon (139.178.89.65:49950). Feb 13 19:51:55.281335 sshd[1689]: Accepted publickey for core from 139.178.89.65 port 49950 ssh2: RSA SHA256:kqLFdnc4GUa+fVBnzdXd2t9fY8wvn6J3Jul4mo43txI Feb 13 19:51:55.283159 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:55.288574 systemd-logind[1463]: New session 6 of user core. Feb 13 19:51:55.300572 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:51:55.462440 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:51:55.462958 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:55.468325 sudo[1693]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:55.482735 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:51:55.483239 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:55.500825 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:51:55.557584 augenrules[1715]: No rules Feb 13 19:51:55.559689 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:51:55.560005 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:51:55.561643 sudo[1692]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:55.604838 sshd[1691]: Connection closed by 139.178.89.65 port 49950 Feb 13 19:51:55.605784 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:55.611177 systemd[1]: sshd@5-10.128.0.69:22-139.178.89.65:49950.service: Deactivated successfully. Feb 13 19:51:55.613503 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:51:55.614561 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:51:55.615979 systemd-logind[1463]: Removed session 6. Feb 13 19:51:55.661737 systemd[1]: Started sshd@6-10.128.0.69:22-139.178.89.65:49960.service - OpenSSH per-connection server daemon (139.178.89.65:49960). Feb 13 19:51:55.969419 sshd[1723]: Accepted publickey for core from 139.178.89.65 port 49960 ssh2: RSA SHA256:kqLFdnc4GUa+fVBnzdXd2t9fY8wvn6J3Jul4mo43txI Feb 13 19:51:55.971271 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:55.977362 systemd-logind[1463]: New session 7 of user core. Feb 13 19:51:55.984533 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:51:56.150568 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:51:56.151079 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:57.107558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:57.108117 systemd[1]: kubelet.service: Consumed 1.261s CPU time. Feb 13 19:51:57.122812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:57.161393 systemd[1]: Reloading requested from client PID 1764 ('systemctl') (unit session-7.scope)... Feb 13 19:51:57.161584 systemd[1]: Reloading... Feb 13 19:51:57.364346 zram_generator::config[1805]: No configuration found. 
Feb 13 19:51:57.501768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:57.603072 systemd[1]: Reloading finished in 440 ms. Feb 13 19:51:57.671915 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:51:57.672050 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:51:57.672527 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:57.679743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:57.967672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:57.984021 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:51:58.038076 kubelet[1855]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:58.038076 kubelet[1855]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:51:58.038574 kubelet[1855]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:58.040076 kubelet[1855]: I0213 19:51:58.039993 1855 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:51:58.713102 kubelet[1855]: I0213 19:51:58.713056 1855 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:51:58.714498 kubelet[1855]: I0213 19:51:58.713305 1855 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:51:58.714498 kubelet[1855]: I0213 19:51:58.713593 1855 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:51:58.738784 kubelet[1855]: I0213 19:51:58.738105 1855 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:58.756182 kubelet[1855]: I0213 19:51:58.756124 1855 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:51:58.756619 kubelet[1855]: I0213 19:51:58.756541 1855 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:51:58.756868 kubelet[1855]: I0213 19:51:58.756601 1855 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.128.0.69","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:51:58.758019 kubelet[1855]: I0213 19:51:58.757982 1855 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:51:58.758019 kubelet[1855]: I0213 19:51:58.758020 1855 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:51:58.758237 kubelet[1855]: I0213 19:51:58.758206 1855 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:58.759748 kubelet[1855]: I0213 19:51:58.759619 1855 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:51:58.759748 kubelet[1855]: I0213 19:51:58.759651 1855 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:51:58.759748 kubelet[1855]: I0213 19:51:58.759686 1855 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:51:58.759748 kubelet[1855]: I0213 19:51:58.759712 1855 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:51:58.761388 kubelet[1855]: E0213 19:51:58.761359 1855 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:58.761590 kubelet[1855]: E0213 19:51:58.761571 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:58.766142 kubelet[1855]: I0213 19:51:58.766061 1855 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:51:58.768514 kubelet[1855]: I0213 19:51:58.768229 1855 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:51:58.768514 kubelet[1855]: W0213 19:51:58.768357 1855 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:51:58.772169 kubelet[1855]: I0213 19:51:58.771534 1855 server.go:1264] "Started kubelet" Feb 13 19:51:58.772169 kubelet[1855]: W0213 19:51:58.771624 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.128.0.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:51:58.772169 kubelet[1855]: E0213 19:51:58.771666 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.69" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:51:58.772169 kubelet[1855]: W0213 19:51:58.771798 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:51:58.772169 kubelet[1855]: E0213 19:51:58.771832 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:51:58.772645 kubelet[1855]: I0213 19:51:58.772542 1855 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:51:58.773210 kubelet[1855]: I0213 19:51:58.773186 1855 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:51:58.776114 kubelet[1855]: I0213 19:51:58.776060 1855 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:51:58.786321 kubelet[1855]: I0213 19:51:58.784959 1855 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:51:58.786940 kubelet[1855]: I0213 19:51:58.786908 1855 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:51:58.792800 kubelet[1855]: I0213 19:51:58.792742 1855 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:51:58.794858 kubelet[1855]: I0213 19:51:58.794816 1855 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:51:58.795671 kubelet[1855]: I0213 19:51:58.795260 1855 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:51:58.799104 kubelet[1855]: I0213 19:51:58.798999 1855 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:51:58.799237 kubelet[1855]: I0213 19:51:58.799204 1855 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:51:58.800114 kubelet[1855]: E0213 19:51:58.800078 1855 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:51:58.802019 kubelet[1855]: I0213 19:51:58.801979 1855 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:51:58.820347 kubelet[1855]: E0213 19:51:58.820174 1855 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.128.0.69\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:51:58.820758 kubelet[1855]: W0213 19:51:58.820734 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:51:58.820927 kubelet[1855]: E0213 19:51:58.820912 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:51:58.821387 kubelet[1855]: E0213 19:51:58.821151 1855 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.128.0.69.1823dc7eeb944c3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.128.0.69,UID:10.128.0.69,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.128.0.69,},FirstTimestamp:2025-02-13 19:51:58.771498046 +0000 UTC m=+0.782860693,LastTimestamp:2025-02-13 19:51:58.771498046 +0000 UTC m=+0.782860693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.128.0.69,}" Feb 13 19:51:58.835048 kubelet[1855]: I0213 19:51:58.835020 1855 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:51:58.835253 kubelet[1855]: I0213 19:51:58.835240 1855 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:51:58.835371 kubelet[1855]: I0213 19:51:58.835359 1855 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:58.842804 kubelet[1855]: I0213 19:51:58.841784 1855 policy_none.go:49] "None policy: Start" Feb 13 19:51:58.843752 kubelet[1855]: I0213 19:51:58.843726 1855 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:51:58.843964 kubelet[1855]: I0213 19:51:58.843947 1855 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:51:58.854907 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:51:58.866994 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:51:58.878627 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
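The container manager nodeConfig logged above carries the kubelet's hard-eviction thresholds (memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%, imagefs.inodesFree below 5%). As a rough illustration of how quantity and percentage signals are compared, here is a minimal Python sketch; it is not kubelet code, and the node stats used in the example calls are made up:

    # Minimal sketch (not kubelet source): evaluating the hard-eviction thresholds
    # printed in the container manager nodeConfig above. The node stats passed in
    # below are hypothetical, only to show quantity vs. percentage comparisons.
    THRESHOLDS = {
        "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
        "nodefs.available":   {"percentage": 0.10},
        "nodefs.inodesFree":  {"percentage": 0.05},
        "imagefs.available":  {"percentage": 0.15},
        "imagefs.inodesFree": {"percentage": 0.05},
    }

    def should_evict(signal, available, capacity):
        """Return True if 'available' has fallen below the configured threshold."""
        t = THRESHOLDS[signal]
        if "quantity" in t:
            return available < t["quantity"]
        return available < t["percentage"] * capacity

    # 512Mi of memory free out of 8Gi -> above the 100Mi floor, no eviction.
    print(should_evict("memory.available", 512 * 2**20, 8 * 2**30))   # False
    # 5Gi of nodefs free out of 100Gi -> below the 10% floor, eviction pressure.
    print(should_evict("nodefs.available", 5 * 2**30, 100 * 2**30))   # True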
Feb 13 19:51:58.882207 kubelet[1855]: I0213 19:51:58.881803 1855 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:51:58.882207 kubelet[1855]: I0213 19:51:58.882148 1855 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:51:58.882433 kubelet[1855]: I0213 19:51:58.882348 1855 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:51:58.885469 kubelet[1855]: E0213 19:51:58.885422 1855 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.69\" not found" Feb 13 19:51:58.895929 kubelet[1855]: I0213 19:51:58.895882 1855 kubelet_node_status.go:73] "Attempting to register node" node="10.128.0.69" Feb 13 19:51:58.903730 kubelet[1855]: I0213 19:51:58.903480 1855 kubelet_node_status.go:76] "Successfully registered node" node="10.128.0.69" Feb 13 19:51:58.914005 kubelet[1855]: I0213 19:51:58.913955 1855 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:51:58.916149 kubelet[1855]: I0213 19:51:58.916104 1855 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:51:58.916149 kubelet[1855]: I0213 19:51:58.916147 1855 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:51:58.916653 kubelet[1855]: I0213 19:51:58.916172 1855 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:51:58.916653 kubelet[1855]: E0213 19:51:58.916243 1855 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 19:51:58.953536 kubelet[1855]: E0213 19:51:58.953470 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.053621 kubelet[1855]: E0213 19:51:59.053562 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.087677 sudo[1726]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:59.130973 sshd[1725]: Connection closed by 139.178.89.65 port 49960 Feb 13 19:51:59.131843 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:59.136785 systemd[1]: sshd@6-10.128.0.69:22-139.178.89.65:49960.service: Deactivated successfully. Feb 13 19:51:59.139147 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:51:59.141643 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:51:59.143375 systemd-logind[1463]: Removed session 7. 
Feb 13 19:51:59.154427 kubelet[1855]: E0213 19:51:59.154362 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.255624 kubelet[1855]: E0213 19:51:59.255473 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.356623 kubelet[1855]: E0213 19:51:59.356447 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.457416 kubelet[1855]: E0213 19:51:59.457339 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.558353 kubelet[1855]: E0213 19:51:59.558262 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.659324 kubelet[1855]: E0213 19:51:59.659137 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.716206 kubelet[1855]: I0213 19:51:59.716129 1855 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:51:59.716422 kubelet[1855]: W0213 19:51:59.716399 1855 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:51:59.759749 kubelet[1855]: E0213 19:51:59.759684 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.761968 kubelet[1855]: E0213 19:51:59.761913 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:59.860646 kubelet[1855]: E0213 19:51:59.860595 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:51:59.961381 kubelet[1855]: E0213 19:51:59.961203 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:52:00.061484 kubelet[1855]: E0213 19:52:00.061405 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:52:00.162241 kubelet[1855]: E0213 19:52:00.162186 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:52:00.262852 kubelet[1855]: E0213 19:52:00.262788 1855 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.128.0.69\" not found" Feb 13 19:52:00.364594 kubelet[1855]: I0213 19:52:00.364531 1855 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:52:00.365016 containerd[1482]: time="2025-02-13T19:52:00.364957824Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
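The runtime config update above hands this node the pod CIDR 192.168.1.0/24. A quick, purely illustrative check of what that allocation provides:

    # Quick check of the pod CIDR pushed to the runtime above (192.168.1.0/24).
    import ipaddress

    cidr = ipaddress.ip_network("192.168.1.0/24")
    print(cidr.num_addresses)        # 256 addresses in the block
    print(len(list(cidr.hosts())))   # 254 usable host addresses for pods on this node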
Feb 13 19:52:00.365697 kubelet[1855]: I0213 19:52:00.365217 1855 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:52:00.762451 kubelet[1855]: E0213 19:52:00.762373 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:00.762451 kubelet[1855]: I0213 19:52:00.762433 1855 apiserver.go:52] "Watching apiserver" Feb 13 19:52:00.772637 kubelet[1855]: I0213 19:52:00.772469 1855 topology_manager.go:215] "Topology Admit Handler" podUID="2a24be58-0d82-4ab0-808b-6b692796d858" podNamespace="calico-system" podName="calico-node-bfchc" Feb 13 19:52:00.772830 kubelet[1855]: I0213 19:52:00.772665 1855 topology_manager.go:215] "Topology Admit Handler" podUID="683959d7-d153-4d27-88df-3603d9992a77" podNamespace="kube-system" podName="kube-proxy-5tr5s" Feb 13 19:52:00.772830 kubelet[1855]: I0213 19:52:00.772790 1855 topology_manager.go:215] "Topology Admit Handler" podUID="482935c4-4939-47ca-9a60-130d52de95d3" podNamespace="calico-system" podName="csi-node-driver-76gzr" Feb 13 19:52:00.773770 kubelet[1855]: E0213 19:52:00.772993 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:00.785727 systemd[1]: Created slice kubepods-besteffort-pod2a24be58_0d82_4ab0_808b_6b692796d858.slice - libcontainer container kubepods-besteffort-pod2a24be58_0d82_4ab0_808b_6b692796d858.slice. Feb 13 19:52:00.802131 kubelet[1855]: I0213 19:52:00.801029 1855 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:52:00.805977 kubelet[1855]: I0213 19:52:00.805932 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-cni-log-dir\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.806299 kubelet[1855]: I0213 19:52:00.806248 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/482935c4-4939-47ca-9a60-130d52de95d3-registration-dir\") pod \"csi-node-driver-76gzr\" (UID: \"482935c4-4939-47ca-9a60-130d52de95d3\") " pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:00.806501 kubelet[1855]: I0213 19:52:00.806479 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/683959d7-d153-4d27-88df-3603d9992a77-kube-proxy\") pod \"kube-proxy-5tr5s\" (UID: \"683959d7-d153-4d27-88df-3603d9992a77\") " pod="kube-system/kube-proxy-5tr5s" Feb 13 19:52:00.806653 kubelet[1855]: I0213 19:52:00.806632 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/683959d7-d153-4d27-88df-3603d9992a77-lib-modules\") pod \"kube-proxy-5tr5s\" (UID: \"683959d7-d153-4d27-88df-3603d9992a77\") " pod="kube-system/kube-proxy-5tr5s" Feb 13 19:52:00.806919 kubelet[1855]: I0213 19:52:00.806884 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-js566\" (UniqueName: \"kubernetes.io/projected/683959d7-d153-4d27-88df-3603d9992a77-kube-api-access-js566\") pod \"kube-proxy-5tr5s\" (UID: \"683959d7-d153-4d27-88df-3603d9992a77\") " pod="kube-system/kube-proxy-5tr5s" Feb 13 19:52:00.807104 kubelet[1855]: I0213 19:52:00.807083 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-lib-modules\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.807300 kubelet[1855]: I0213 19:52:00.807260 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-policysync\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.807453 kubelet[1855]: I0213 19:52:00.807436 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-cni-bin-dir\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.807643 kubelet[1855]: I0213 19:52:00.807593 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-cni-net-dir\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.807893 kubelet[1855]: I0213 19:52:00.807625 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b5mg\" (UniqueName: \"kubernetes.io/projected/2a24be58-0d82-4ab0-808b-6b692796d858-kube-api-access-4b5mg\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.807893 kubelet[1855]: I0213 19:52:00.807787 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/482935c4-4939-47ca-9a60-130d52de95d3-socket-dir\") pod \"csi-node-driver-76gzr\" (UID: \"482935c4-4939-47ca-9a60-130d52de95d3\") " pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:00.807893 kubelet[1855]: I0213 19:52:00.807854 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2a24be58-0d82-4ab0-808b-6b692796d858-node-certs\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.808307 kubelet[1855]: I0213 19:52:00.808122 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-var-run-calico\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.808307 kubelet[1855]: I0213 19:52:00.808205 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-var-lib-calico\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.808657 kubelet[1855]: I0213 19:52:00.808459 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/683959d7-d153-4d27-88df-3603d9992a77-xtables-lock\") pod \"kube-proxy-5tr5s\" (UID: \"683959d7-d153-4d27-88df-3603d9992a77\") " pod="kube-system/kube-proxy-5tr5s" Feb 13 19:52:00.808819 kubelet[1855]: I0213 19:52:00.808725 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzrz5\" (UniqueName: \"kubernetes.io/projected/482935c4-4939-47ca-9a60-130d52de95d3-kube-api-access-rzrz5\") pod \"csi-node-driver-76gzr\" (UID: \"482935c4-4939-47ca-9a60-130d52de95d3\") " pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:00.809053 kubelet[1855]: I0213 19:52:00.808918 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-xtables-lock\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.809053 kubelet[1855]: I0213 19:52:00.809011 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a24be58-0d82-4ab0-808b-6b692796d858-tigera-ca-bundle\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.809703 kubelet[1855]: I0213 19:52:00.809414 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2a24be58-0d82-4ab0-808b-6b692796d858-flexvol-driver-host\") pod \"calico-node-bfchc\" (UID: \"2a24be58-0d82-4ab0-808b-6b692796d858\") " pod="calico-system/calico-node-bfchc" Feb 13 19:52:00.809703 kubelet[1855]: I0213 19:52:00.809485 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/482935c4-4939-47ca-9a60-130d52de95d3-varrun\") pod \"csi-node-driver-76gzr\" (UID: \"482935c4-4939-47ca-9a60-130d52de95d3\") " pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:00.809703 kubelet[1855]: I0213 19:52:00.809627 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/482935c4-4939-47ca-9a60-130d52de95d3-kubelet-dir\") pod \"csi-node-driver-76gzr\" (UID: \"482935c4-4939-47ca-9a60-130d52de95d3\") " pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:00.810881 systemd[1]: Created slice kubepods-besteffort-pod683959d7_d153_4d27_88df_3603d9992a77.slice - libcontainer container kubepods-besteffort-pod683959d7_d153_4d27_88df_3603d9992a77.slice. 
Feb 13 19:52:00.914138 kubelet[1855]: E0213 19:52:00.914101 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:00.914379 kubelet[1855]: W0213 19:52:00.914340 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:00.914569 kubelet[1855]: E0213 19:52:00.914543 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:00.914995 kubelet[1855]: E0213 19:52:00.914972 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:00.915120 kubelet[1855]: W0213 19:52:00.915103 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:00.915229 kubelet[1855]: E0213 19:52:00.915211 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:00.915738 kubelet[1855]: E0213 19:52:00.915717 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:00.915902 kubelet[1855]: W0213 19:52:00.915838 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:00.915902 kubelet[1855]: E0213 19:52:00.915865 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:00.923973 kubelet[1855]: E0213 19:52:00.923750 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:00.923973 kubelet[1855]: W0213 19:52:00.923780 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:00.923973 kubelet[1855]: E0213 19:52:00.923826 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:00.939443 kubelet[1855]: E0213 19:52:00.939149 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:00.939443 kubelet[1855]: W0213 19:52:00.939181 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:00.939443 kubelet[1855]: E0213 19:52:00.939214 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:52:00.940727 kubelet[1855]: E0213 19:52:00.940625 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:00.940727 kubelet[1855]: W0213 19:52:00.940650 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:00.940727 kubelet[1855]: E0213 19:52:00.940677 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:00.941736 kubelet[1855]: E0213 19:52:00.941614 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:00.941736 kubelet[1855]: W0213 19:52:00.941646 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:00.941736 kubelet[1855]: E0213 19:52:00.941671 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:01.107925 containerd[1482]: time="2025-02-13T19:52:01.107648458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bfchc,Uid:2a24be58-0d82-4ab0-808b-6b692796d858,Namespace:calico-system,Attempt:0,}" Feb 13 19:52:01.114718 containerd[1482]: time="2025-02-13T19:52:01.114653302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5tr5s,Uid:683959d7-d153-4d27-88df-3603d9992a77,Namespace:kube-system,Attempt:0,}" Feb 13 19:52:01.763036 kubelet[1855]: E0213 19:52:01.762961 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:01.869465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount489625945.mount: Deactivated successfully. 
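The driver-call failures above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before Calico's flexvol-driver container has installed that binary: the executable is missing, the call produces no output, and unmarshalling the empty string fails with "unexpected end of JSON input". A FlexVolume driver is expected to answer init with a small JSON status on stdout; the stub below is only an illustrative sketch of that contract, not the real Calico uds driver:

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver stub (not the real nodeagent~uds/uds binary):
    # the kubelet invokes "<driver> init" and parses stdout as JSON, which is why
    # an empty output produces "unexpected end of JSON input" above.
    import json
    import sys

    def main():
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report success and advertise that this driver needs no attach/detach.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Decline everything else so the kubelet does not route calls here.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())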
Feb 13 19:52:01.879460 containerd[1482]: time="2025-02-13T19:52:01.879380420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:01.882430 containerd[1482]: time="2025-02-13T19:52:01.882366478Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:01.883602 containerd[1482]: time="2025-02-13T19:52:01.883526808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 19:52:01.885116 containerd[1482]: time="2025-02-13T19:52:01.885056221Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:52:01.886500 containerd[1482]: time="2025-02-13T19:52:01.886415670Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:01.892315 containerd[1482]: time="2025-02-13T19:52:01.890652555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:52:01.897436 containerd[1482]: time="2025-02-13T19:52:01.897371474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 789.567456ms" Feb 13 19:52:01.898522 containerd[1482]: time="2025-02-13T19:52:01.898448794Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 783.672349ms" Feb 13 19:52:02.079116 containerd[1482]: time="2025-02-13T19:52:02.078760558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:02.079116 containerd[1482]: time="2025-02-13T19:52:02.078842427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:02.079116 containerd[1482]: time="2025-02-13T19:52:02.078884244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:02.086829 containerd[1482]: time="2025-02-13T19:52:02.079167896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:02.092454 containerd[1482]: time="2025-02-13T19:52:02.076913432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:02.092454 containerd[1482]: time="2025-02-13T19:52:02.092096714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:02.092454 containerd[1482]: time="2025-02-13T19:52:02.092200894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:02.092454 containerd[1482]: time="2025-02-13T19:52:02.092677223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:02.205591 systemd[1]: Started cri-containerd-5ebc7fdbc25171e04d7118eb0eec8705598bdab33e21bed4bb3ec016306bde77.scope - libcontainer container 5ebc7fdbc25171e04d7118eb0eec8705598bdab33e21bed4bb3ec016306bde77. Feb 13 19:52:02.208230 systemd[1]: Started cri-containerd-d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9.scope - libcontainer container d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9. Feb 13 19:52:02.264223 containerd[1482]: time="2025-02-13T19:52:02.264144352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bfchc,Uid:2a24be58-0d82-4ab0-808b-6b692796d858,Namespace:calico-system,Attempt:0,} returns sandbox id \"d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9\"" Feb 13 19:52:02.264725 containerd[1482]: time="2025-02-13T19:52:02.264685221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5tr5s,Uid:683959d7-d153-4d27-88df-3603d9992a77,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ebc7fdbc25171e04d7118eb0eec8705598bdab33e21bed4bb3ec016306bde77\"" Feb 13 19:52:02.268173 containerd[1482]: time="2025-02-13T19:52:02.268120543Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:52:02.763521 kubelet[1855]: E0213 19:52:02.763456 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:02.919318 kubelet[1855]: E0213 19:52:02.917685 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:03.372049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569678616.mount: Deactivated successfully. 
Feb 13 19:52:03.764252 kubelet[1855]: E0213 19:52:03.764178 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:03.943773 containerd[1482]: time="2025-02-13T19:52:03.943703037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:03.945638 containerd[1482]: time="2025-02-13T19:52:03.945536563Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29059753" Feb 13 19:52:03.947150 containerd[1482]: time="2025-02-13T19:52:03.947056792Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:03.950538 containerd[1482]: time="2025-02-13T19:52:03.950459283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:03.951824 containerd[1482]: time="2025-02-13T19:52:03.951544874Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.683375751s" Feb 13 19:52:03.951824 containerd[1482]: time="2025-02-13T19:52:03.951599878Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:52:03.953585 containerd[1482]: time="2025-02-13T19:52:03.953297140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:52:03.955816 containerd[1482]: time="2025-02-13T19:52:03.955762669Z" level=info msg="CreateContainer within sandbox \"5ebc7fdbc25171e04d7118eb0eec8705598bdab33e21bed4bb3ec016306bde77\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:52:03.984407 containerd[1482]: time="2025-02-13T19:52:03.984325274Z" level=info msg="CreateContainer within sandbox \"5ebc7fdbc25171e04d7118eb0eec8705598bdab33e21bed4bb3ec016306bde77\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0848ec6cc1539e1426b5ca7ad75413a62400849dcb5baf1e6b0799811935be5f\"" Feb 13 19:52:03.985448 containerd[1482]: time="2025-02-13T19:52:03.985399092Z" level=info msg="StartContainer for \"0848ec6cc1539e1426b5ca7ad75413a62400849dcb5baf1e6b0799811935be5f\"" Feb 13 19:52:04.031802 systemd[1]: Started cri-containerd-0848ec6cc1539e1426b5ca7ad75413a62400849dcb5baf1e6b0799811935be5f.scope - libcontainer container 0848ec6cc1539e1426b5ca7ad75413a62400849dcb5baf1e6b0799811935be5f. 
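From the pull record above (29,059,753 bytes read in 1.683375751s for kube-proxy:v1.30.10), a back-of-the-envelope transfer rate; treat it as a rough figure, since the byte count is containerd's read counter rather than an exact on-the-wire size:

    # Rough pull rate for registry.k8s.io/kube-proxy:v1.30.10 from the entry above.
    bytes_read = 29_059_753
    seconds = 1.683375751
    print(f"{bytes_read / seconds / 2**20:.1f} MiB/s")   # ~16.5 MiB/s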
Feb 13 19:52:04.078560 containerd[1482]: time="2025-02-13T19:52:04.078054043Z" level=info msg="StartContainer for \"0848ec6cc1539e1426b5ca7ad75413a62400849dcb5baf1e6b0799811935be5f\" returns successfully" Feb 13 19:52:04.764466 kubelet[1855]: E0213 19:52:04.764387 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:04.917420 kubelet[1855]: E0213 19:52:04.917345 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:04.966262 kubelet[1855]: I0213 19:52:04.966048 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tr5s" podStartSLOduration=5.280300887 podStartE2EDuration="6.966023309s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="2025-02-13 19:52:02.267346511 +0000 UTC m=+4.278709148" lastFinishedPulling="2025-02-13 19:52:03.953068934 +0000 UTC m=+5.964431570" observedRunningTime="2025-02-13 19:52:04.96568176 +0000 UTC m=+6.977044406" watchObservedRunningTime="2025-02-13 19:52:04.966023309 +0000 UTC m=+6.977385958" Feb 13 19:52:04.977038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037087319.mount: Deactivated successfully. Feb 13 19:52:05.029768 kubelet[1855]: E0213 19:52:05.029058 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.029768 kubelet[1855]: W0213 19:52:05.029114 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.029768 kubelet[1855]: E0213 19:52:05.029150 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.030540 kubelet[1855]: E0213 19:52:05.030115 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.030540 kubelet[1855]: W0213 19:52:05.030143 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.030540 kubelet[1855]: E0213 19:52:05.030171 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.030772 kubelet[1855]: E0213 19:52:05.030602 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.030772 kubelet[1855]: W0213 19:52:05.030619 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.030772 kubelet[1855]: E0213 19:52:05.030668 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:52:05.031699 kubelet[1855]: E0213 19:52:05.031146 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.031699 kubelet[1855]: W0213 19:52:05.031165 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.031699 kubelet[1855]: E0213 19:52:05.031190 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.031699 kubelet[1855]: E0213 19:52:05.031617 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.031699 kubelet[1855]: W0213 19:52:05.031633 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.031699 kubelet[1855]: E0213 19:52:05.031650 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.032910 kubelet[1855]: E0213 19:52:05.032000 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.032910 kubelet[1855]: W0213 19:52:05.032014 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.032910 kubelet[1855]: E0213 19:52:05.032030 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.032910 kubelet[1855]: E0213 19:52:05.032382 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.032910 kubelet[1855]: W0213 19:52:05.032397 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.032910 kubelet[1855]: E0213 19:52:05.032414 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.032910 kubelet[1855]: E0213 19:52:05.032843 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.032910 kubelet[1855]: W0213 19:52:05.032858 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.032910 kubelet[1855]: E0213 19:52:05.032874 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:52:05.034058 kubelet[1855]: E0213 19:52:05.033256 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.034058 kubelet[1855]: W0213 19:52:05.033271 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.034058 kubelet[1855]: E0213 19:52:05.033363 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.034058 kubelet[1855]: E0213 19:52:05.033734 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.034058 kubelet[1855]: W0213 19:52:05.033747 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.034058 kubelet[1855]: E0213 19:52:05.033763 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.034800 kubelet[1855]: E0213 19:52:05.034146 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.034800 kubelet[1855]: W0213 19:52:05.034160 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.034800 kubelet[1855]: E0213 19:52:05.034177 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.034800 kubelet[1855]: E0213 19:52:05.034556 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.034800 kubelet[1855]: W0213 19:52:05.034597 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.034800 kubelet[1855]: E0213 19:52:05.034614 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.035451 kubelet[1855]: E0213 19:52:05.035024 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.035451 kubelet[1855]: W0213 19:52:05.035039 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.035451 kubelet[1855]: E0213 19:52:05.035060 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:52:05.035967 kubelet[1855]: E0213 19:52:05.035484 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.035967 kubelet[1855]: W0213 19:52:05.035498 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.035967 kubelet[1855]: E0213 19:52:05.035612 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.035967 kubelet[1855]: E0213 19:52:05.035911 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.035967 kubelet[1855]: W0213 19:52:05.035936 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.035967 kubelet[1855]: E0213 19:52:05.035953 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.037439 kubelet[1855]: E0213 19:52:05.036353 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.037439 kubelet[1855]: W0213 19:52:05.036391 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.037439 kubelet[1855]: E0213 19:52:05.036414 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.037439 kubelet[1855]: E0213 19:52:05.036814 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.037439 kubelet[1855]: W0213 19:52:05.036829 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.037439 kubelet[1855]: E0213 19:52:05.036847 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.037439 kubelet[1855]: E0213 19:52:05.037252 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.037439 kubelet[1855]: W0213 19:52:05.037266 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.037439 kubelet[1855]: E0213 19:52:05.037299 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:52:05.037933 kubelet[1855]: E0213 19:52:05.037668 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.037933 kubelet[1855]: W0213 19:52:05.037681 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.037933 kubelet[1855]: E0213 19:52:05.037697 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.038100 kubelet[1855]: E0213 19:52:05.038068 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.038165 kubelet[1855]: W0213 19:52:05.038099 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.038165 kubelet[1855]: E0213 19:52:05.038116 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.039647 kubelet[1855]: E0213 19:52:05.039624 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.039775 kubelet[1855]: W0213 19:52:05.039759 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.039878 kubelet[1855]: E0213 19:52:05.039864 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.040505 kubelet[1855]: E0213 19:52:05.040356 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.040505 kubelet[1855]: W0213 19:52:05.040374 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.040505 kubelet[1855]: E0213 19:52:05.040397 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:52:05.041172 kubelet[1855]: E0213 19:52:05.041123 1855 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:52:05.041172 kubelet[1855]: W0213 19:52:05.041147 1855 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:52:05.041684 kubelet[1855]: E0213 19:52:05.041378 1855 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:52:05.121505 containerd[1482]: time="2025-02-13T19:52:05.121434589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:05.122800 containerd[1482]: time="2025-02-13T19:52:05.122724355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:52:05.124388 containerd[1482]: time="2025-02-13T19:52:05.124317028Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:05.127354 containerd[1482]: time="2025-02-13T19:52:05.127245082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:05.128405 containerd[1482]: time="2025-02-13T19:52:05.128136568Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.174795267s" Feb 13 19:52:05.128405 containerd[1482]: time="2025-02-13T19:52:05.128201656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:52:05.132659 containerd[1482]: time="2025-02-13T19:52:05.132592120Z" level=info msg="CreateContainer within sandbox \"d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:52:05.157862 containerd[1482]: time="2025-02-13T19:52:05.157793099Z" level=info msg="CreateContainer within sandbox \"d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1\"" Feb 13 19:52:05.158816 containerd[1482]: time="2025-02-13T19:52:05.158623484Z" level=info msg="StartContainer for \"b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1\"" Feb 13 19:52:05.205562 systemd[1]: Started cri-containerd-b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1.scope - libcontainer container b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1. Feb 13 19:52:05.248123 containerd[1482]: time="2025-02-13T19:52:05.248014319Z" level=info msg="StartContainer for \"b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1\" returns successfully" Feb 13 19:52:05.263891 systemd[1]: cri-containerd-b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1.scope: Deactivated successfully. Feb 13 19:52:05.299862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1-rootfs.mount: Deactivated successfully. 
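
The burst of kubelet errors above is FlexVolume plugin probing: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ the kubelet execs the driver binary with the init argument and expects a JSON status object on stdout, and here the nodeagent~uds binary does not exist yet (the flexvol-driver init container that installs it is only now starting), so the empty output cannot be unmarshalled. A minimal stdlib-only sketch of that probe pattern; the status struct is simplified and this is an illustration, not the kubelet's actual driver-call code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the shape of a FlexVolume driver reply such as
// {"status":"Success","capabilities":{"attach":false}}; the field set is simplified.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeDriver execs `<driver> init` and decodes its JSON reply, roughly what
// the kubelet does when it dynamically probes FlexVolume plugin directories.
func probeDriver(driverPath string) (*driverStatus, error) {
	out, err := exec.Command(driverPath, "init").CombinedOutput()
	if err != nil {
		// A missing or non-executable binary fails here, before any JSON is produced.
		return nil, fmt.Errorf("driver call failed: %w, output: %q", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty output yields "unexpected end of JSON input", the error seen in the log.
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
	}
	return &st, nil
}

func main() {
	// Same plugin path as in the log; on this node it does not exist yet.
	st, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println("driver initialized:", st.Status)
}

Once the flexvol-driver container created above has copied the driver binary into place, the same probe should succeed and these errors stop.
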
Feb 13 19:52:05.672121 containerd[1482]: time="2025-02-13T19:52:05.671848976Z" level=info msg="shim disconnected" id=b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1 namespace=k8s.io Feb 13 19:52:05.672121 containerd[1482]: time="2025-02-13T19:52:05.671926805Z" level=warning msg="cleaning up after shim disconnected" id=b0d64ca351f0ec559c6a8dda53fbac8b7d6891daee628e673690002d151b33e1 namespace=k8s.io Feb 13 19:52:05.672121 containerd[1482]: time="2025-02-13T19:52:05.671942433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:52:05.764697 kubelet[1855]: E0213 19:52:05.764637 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:05.957644 containerd[1482]: time="2025-02-13T19:52:05.957497604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:52:06.765818 kubelet[1855]: E0213 19:52:06.765747 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:06.917062 kubelet[1855]: E0213 19:52:06.917005 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:07.766326 kubelet[1855]: E0213 19:52:07.766247 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:08.767482 kubelet[1855]: E0213 19:52:08.767359 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:08.917357 kubelet[1855]: E0213 19:52:08.916685 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:09.755150 containerd[1482]: time="2025-02-13T19:52:09.755080101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:09.756627 containerd[1482]: time="2025-02-13T19:52:09.756538424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:52:09.758313 containerd[1482]: time="2025-02-13T19:52:09.758217840Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:09.761802 containerd[1482]: time="2025-02-13T19:52:09.761696896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:09.766313 containerd[1482]: time="2025-02-13T19:52:09.764896082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 
3.806937536s" Feb 13 19:52:09.766313 containerd[1482]: time="2025-02-13T19:52:09.764951189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:52:09.768205 kubelet[1855]: E0213 19:52:09.768169 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:09.773310 containerd[1482]: time="2025-02-13T19:52:09.773249619Z" level=info msg="CreateContainer within sandbox \"d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:52:09.794468 containerd[1482]: time="2025-02-13T19:52:09.794400290Z" level=info msg="CreateContainer within sandbox \"d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4\"" Feb 13 19:52:09.795443 containerd[1482]: time="2025-02-13T19:52:09.795163118Z" level=info msg="StartContainer for \"eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4\"" Feb 13 19:52:09.842497 systemd[1]: Started cri-containerd-eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4.scope - libcontainer container eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4. Feb 13 19:52:09.884975 containerd[1482]: time="2025-02-13T19:52:09.884812635Z" level=info msg="StartContainer for \"eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4\" returns successfully" Feb 13 19:52:10.733421 containerd[1482]: time="2025-02-13T19:52:10.733087354Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:52:10.735402 systemd[1]: cri-containerd-eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4.scope: Deactivated successfully. Feb 13 19:52:10.749648 kubelet[1855]: I0213 19:52:10.749447 1855 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:52:10.768355 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4-rootfs.mount: Deactivated successfully. Feb 13 19:52:10.771559 kubelet[1855]: E0213 19:52:10.771369 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:10.926063 systemd[1]: Created slice kubepods-besteffort-pod482935c4_4939_47ca_9a60_130d52de95d3.slice - libcontainer container kubepods-besteffort-pod482935c4_4939_47ca_9a60_130d52de95d3.slice. 
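
The "no network config found in /etc/cni/net.d" error above comes from containerd's CRI plugin reacting to a filesystem change in its CNI config directory before the install-cni container has written a usable network config there; only the calico-kubeconfig file exists at that point. A rough stdlib-only sketch of that kind of directory scan, assuming that any *.conf, *.conflist or *.json file with a valid JSON body counts as a network config; this is an illustration, not containerd's loader:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// findCNIConfig scans a CNI configuration directory for *.conf, *.conflist or
// *.json files that parse as JSON, roughly mirroring the "no network config
// found" check triggered by the fs change event in the log.
func findCNIConfig(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var configs []string
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if e.IsDir() || (ext != ".conf" && ext != ".conflist" && ext != ".json") {
			continue // files like calico-kubeconfig are skipped by this filter
		}
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			return nil, err
		}
		var raw map[string]any
		if err := json.Unmarshal(data, &raw); err != nil {
			continue // ignore files that are not valid JSON
		}
		configs = append(configs, e.Name())
	}
	if len(configs) == 0 {
		return nil, fmt.Errorf("no network config found in %s", dir)
	}
	return configs, nil
}

func main() {
	names, err := findCNIConfig("/etc/cni/net.d")
	if err != nil {
		fmt.Println("cni config load failed:", err)
		return
	}
	fmt.Println("loaded CNI configs:", names)
}
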
Feb 13 19:52:10.976010 containerd[1482]: time="2025-02-13T19:52:10.975023918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:0,}" Feb 13 19:52:11.536324 containerd[1482]: time="2025-02-13T19:52:11.536170132Z" level=info msg="shim disconnected" id=eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4 namespace=k8s.io Feb 13 19:52:11.536324 containerd[1482]: time="2025-02-13T19:52:11.536321983Z" level=warning msg="cleaning up after shim disconnected" id=eff881986363360ca585adc695d2f641eaa4d8e3882b262f076792ad2c68a0f4 namespace=k8s.io Feb 13 19:52:11.536591 containerd[1482]: time="2025-02-13T19:52:11.536340503Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:52:11.574708 containerd[1482]: time="2025-02-13T19:52:11.574625331Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:52:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:52:11.625697 containerd[1482]: time="2025-02-13T19:52:11.625632206Z" level=error msg="Failed to destroy network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:11.628727 containerd[1482]: time="2025-02-13T19:52:11.628657272Z" level=error msg="encountered an error cleaning up failed sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:11.628908 containerd[1482]: time="2025-02-13T19:52:11.628799684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:11.629160 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e-shm.mount: Deactivated successfully. 
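
Every sandbox failure that follows reduces to one precondition: Calico's CNI plugin reads the node name from /var/lib/calico/nodename, a file the calico/node container writes once it is running, and until then the stat fails with the error quoted in the log. A small illustrative check of that precondition (the real plugin does far more than this):

package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// calicoNodename reads the node name recorded under /var/lib/calico/, the file
// the Calico CNI plugin stats before handling any add or delete request.
func calicoNodename() (string, error) {
	data, err := os.ReadFile("/var/lib/calico/nodename")
	if errors.Is(err, os.ErrNotExist) {
		// This is the condition behind the repeated sandbox errors in the log.
		return "", fmt.Errorf("stat /var/lib/calico/nodename: %w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("calico node name:", name)
}
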
Feb 13 19:52:11.630349 kubelet[1855]: E0213 19:52:11.629822 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:11.630349 kubelet[1855]: E0213 19:52:11.630023 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:11.630349 kubelet[1855]: E0213 19:52:11.630059 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:11.630623 kubelet[1855]: E0213 19:52:11.630134 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:11.772352 kubelet[1855]: E0213 19:52:11.772262 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:11.985835 containerd[1482]: time="2025-02-13T19:52:11.985320774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:52:11.986512 kubelet[1855]: I0213 19:52:11.985488 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e" Feb 13 19:52:11.986595 containerd[1482]: time="2025-02-13T19:52:11.986308518Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:11.986595 containerd[1482]: time="2025-02-13T19:52:11.986522886Z" level=info msg="Ensure that sandbox 38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e in task-service has been cleanup successfully" Feb 13 19:52:11.987229 containerd[1482]: time="2025-02-13T19:52:11.986945858Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:11.989075 containerd[1482]: time="2025-02-13T19:52:11.986972694Z" level=info msg="StopPodSandbox for 
\"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:11.989956 systemd[1]: run-netns-cni\x2db31edd1d\x2ddc32\x2dbee9\x2dab6e\x2d3895bd4097d9.mount: Deactivated successfully. Feb 13 19:52:11.992777 containerd[1482]: time="2025-02-13T19:52:11.990900106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:1,}" Feb 13 19:52:12.076298 containerd[1482]: time="2025-02-13T19:52:12.076211119Z" level=error msg="Failed to destroy network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:12.076741 containerd[1482]: time="2025-02-13T19:52:12.076676902Z" level=error msg="encountered an error cleaning up failed sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:12.076856 containerd[1482]: time="2025-02-13T19:52:12.076772127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:12.077182 kubelet[1855]: E0213 19:52:12.077107 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:12.077346 kubelet[1855]: E0213 19:52:12.077198 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:12.077346 kubelet[1855]: E0213 19:52:12.077228 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:12.077618 kubelet[1855]: E0213 19:52:12.077383 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:12.092922 kubelet[1855]: I0213 19:52:12.092763 1855 topology_manager.go:215] "Topology Admit Handler" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" podNamespace="default" podName="nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:12.101399 systemd[1]: Created slice kubepods-besteffort-pod8a69c858_fead_44ad_aa24_1c0fd99da2c3.slice - libcontainer container kubepods-besteffort-pod8a69c858_fead_44ad_aa24_1c0fd99da2c3.slice. Feb 13 19:52:12.182798 kubelet[1855]: I0213 19:52:12.182678 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5lsb\" (UniqueName: \"kubernetes.io/projected/8a69c858-fead-44ad-aa24-1c0fd99da2c3-kube-api-access-g5lsb\") pod \"nginx-deployment-85f456d6dd-ksk75\" (UID: \"8a69c858-fead-44ad-aa24-1c0fd99da2c3\") " pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:12.405211 containerd[1482]: time="2025-02-13T19:52:12.405041883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:0,}" Feb 13 19:52:12.485902 containerd[1482]: time="2025-02-13T19:52:12.485810807Z" level=error msg="Failed to destroy network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:12.486465 containerd[1482]: time="2025-02-13T19:52:12.486315007Z" level=error msg="encountered an error cleaning up failed sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:12.486465 containerd[1482]: time="2025-02-13T19:52:12.486402354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:12.487041 kubelet[1855]: E0213 19:52:12.486986 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:12.487181 kubelet[1855]: E0213 19:52:12.487131 1855 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:12.487181 kubelet[1855]: E0213 19:52:12.487165 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:12.487407 kubelet[1855]: E0213 19:52:12.487246 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-ksk75" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" Feb 13 19:52:12.547453 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872-shm.mount: Deactivated successfully. Feb 13 19:52:12.773500 kubelet[1855]: E0213 19:52:12.773428 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:12.990616 kubelet[1855]: I0213 19:52:12.990505 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8" Feb 13 19:52:12.991472 containerd[1482]: time="2025-02-13T19:52:12.991390679Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:12.993033 containerd[1482]: time="2025-02-13T19:52:12.991691515Z" level=info msg="Ensure that sandbox b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8 in task-service has been cleanup successfully" Feb 13 19:52:12.996817 containerd[1482]: time="2025-02-13T19:52:12.996747293Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:12.996817 containerd[1482]: time="2025-02-13T19:52:12.996794070Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:12.998317 containerd[1482]: time="2025-02-13T19:52:12.997731161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:1,}" Feb 13 19:52:12.999155 systemd[1]: run-netns-cni\x2daa81c61b\x2ddf4c\x2d7d9b\x2d8a8a\x2df44b88109436.mount: Deactivated successfully. 
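
From here the log settles into a teardown-and-retry loop: each failed attempt leaves a sandbox ID behind, the kubelet issues StopPodSandbox/TearDown for it (hence the repeated run-netns and shm mount deactivations), then retries RunPodSandbox with the Attempt counter incremented. A schematic sketch of that loop using hypothetical runSandbox/stopSandbox stubs, not the kubelet's actual pod workers:

package main

import (
	"errors"
	"fmt"
	"time"
)

// runSandbox and stopSandbox are hypothetical stand-ins for the CRI
// RunPodSandbox / StopPodSandbox calls seen in the log.
func runSandbox(pod string, attempt int) (sandboxID string, err error) {
	return fmt.Sprintf("sandbox-%s-%d", pod, attempt),
		errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
}

func stopSandbox(sandboxID string) {
	// Tear down networking and release the sandbox's netns/shm mounts.
	fmt.Println("TearDown network for sandbox", sandboxID)
}

// syncPod retries sandbox creation, tearing down the previous failed sandbox
// first; the Attempt numbers in the log correspond to the loop counter here.
func syncPod(pod string, maxAttempts int) error {
	var lastSandbox string
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if lastSandbox != "" {
			stopSandbox(lastSandbox)
		}
		id, err := runSandbox(pod, attempt)
		if err == nil {
			fmt.Println("sandbox ready:", id)
			return nil
		}
		lastSandbox = id
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(100 * time.Millisecond) // a real kubelet backs off between sync attempts
	}
	return fmt.Errorf("giving up on pod %s", pod)
}

func main() {
	_ = syncPod("csi-node-driver-76gzr", 5)
}
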
Feb 13 19:52:13.010090 kubelet[1855]: I0213 19:52:13.009129 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872" Feb 13 19:52:13.012840 containerd[1482]: time="2025-02-13T19:52:13.012303015Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" Feb 13 19:52:13.012840 containerd[1482]: time="2025-02-13T19:52:13.012629807Z" level=info msg="Ensure that sandbox ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872 in task-service has been cleanup successfully" Feb 13 19:52:13.013434 containerd[1482]: time="2025-02-13T19:52:13.013234996Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully" Feb 13 19:52:13.013434 containerd[1482]: time="2025-02-13T19:52:13.013299277Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully" Feb 13 19:52:13.019744 containerd[1482]: time="2025-02-13T19:52:13.019379271Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:13.019744 containerd[1482]: time="2025-02-13T19:52:13.019532404Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:13.019744 containerd[1482]: time="2025-02-13T19:52:13.019551944Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:13.019768 systemd[1]: run-netns-cni\x2dd6a86812\x2d6724\x2dbbd6\x2d17d4\x2d5e5f15ea58fc.mount: Deactivated successfully. Feb 13 19:52:13.021438 containerd[1482]: time="2025-02-13T19:52:13.020868363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:2,}" Feb 13 19:52:13.199700 containerd[1482]: time="2025-02-13T19:52:13.198187117Z" level=error msg="Failed to destroy network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:13.199700 containerd[1482]: time="2025-02-13T19:52:13.198656766Z" level=error msg="encountered an error cleaning up failed sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:13.199700 containerd[1482]: time="2025-02-13T19:52:13.198740235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:13.200669 kubelet[1855]: E0213 19:52:13.200149 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:13.200669 kubelet[1855]: E0213 19:52:13.200223 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:13.200669 kubelet[1855]: E0213 19:52:13.200258 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:13.200944 kubelet[1855]: E0213 19:52:13.200335 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:13.202376 containerd[1482]: time="2025-02-13T19:52:13.202330568Z" level=error msg="Failed to destroy network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:13.203151 containerd[1482]: time="2025-02-13T19:52:13.203110438Z" level=error msg="encountered an error cleaning up failed sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:13.203373 containerd[1482]: time="2025-02-13T19:52:13.203343194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:13.204176 kubelet[1855]: E0213 19:52:13.203899 1855 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:13.204176 kubelet[1855]: E0213 19:52:13.203964 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:13.204176 kubelet[1855]: E0213 19:52:13.204009 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:13.204496 kubelet[1855]: E0213 19:52:13.204072 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-ksk75" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" Feb 13 19:52:13.546869 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be-shm.mount: Deactivated successfully. 
Feb 13 19:52:13.774622 kubelet[1855]: E0213 19:52:13.774561 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:14.014229 kubelet[1855]: I0213 19:52:14.013414 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be" Feb 13 19:52:14.017320 containerd[1482]: time="2025-02-13T19:52:14.014443394Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:14.017320 containerd[1482]: time="2025-02-13T19:52:14.014730236Z" level=info msg="Ensure that sandbox 92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be in task-service has been cleanup successfully" Feb 13 19:52:14.018391 kubelet[1855]: I0213 19:52:14.017977 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5" Feb 13 19:52:14.018776 containerd[1482]: time="2025-02-13T19:52:14.018248636Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:14.018776 containerd[1482]: time="2025-02-13T19:52:14.018329332Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully" Feb 13 19:52:14.019660 containerd[1482]: time="2025-02-13T19:52:14.019387267Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\"" Feb 13 19:52:14.019899 systemd[1]: run-netns-cni\x2d386dc27c\x2d2337\x2da0be\x2d3e0e\x2d21df60305022.mount: Deactivated successfully. Feb 13 19:52:14.020708 containerd[1482]: time="2025-02-13T19:52:14.019936231Z" level=info msg="Ensure that sandbox a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5 in task-service has been cleanup successfully" Feb 13 19:52:14.021741 containerd[1482]: time="2025-02-13T19:52:14.021126288Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully" Feb 13 19:52:14.021741 containerd[1482]: time="2025-02-13T19:52:14.021158854Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully" Feb 13 19:52:14.021741 containerd[1482]: time="2025-02-13T19:52:14.019415365Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:14.021741 containerd[1482]: time="2025-02-13T19:52:14.021433572Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:14.021741 containerd[1482]: time="2025-02-13T19:52:14.021454982Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:14.022723 containerd[1482]: time="2025-02-13T19:52:14.022294662Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" Feb 13 19:52:14.022723 containerd[1482]: time="2025-02-13T19:52:14.022415927Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully" Feb 13 19:52:14.022723 containerd[1482]: time="2025-02-13T19:52:14.022434206Z" level=info msg="StopPodSandbox for 
\"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully" Feb 13 19:52:14.022723 containerd[1482]: time="2025-02-13T19:52:14.022436788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:2,}" Feb 13 19:52:14.025190 systemd[1]: run-netns-cni\x2d0b2c0343\x2df39f\x2da53a\x2dd814\x2d55fa1933d5ba.mount: Deactivated successfully. Feb 13 19:52:14.027120 containerd[1482]: time="2025-02-13T19:52:14.026611027Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:14.027692 containerd[1482]: time="2025-02-13T19:52:14.027425890Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:14.027692 containerd[1482]: time="2025-02-13T19:52:14.027537493Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:14.028736 containerd[1482]: time="2025-02-13T19:52:14.028677951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:3,}" Feb 13 19:52:14.498602 containerd[1482]: time="2025-02-13T19:52:14.498390844Z" level=error msg="Failed to destroy network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:14.499167 containerd[1482]: time="2025-02-13T19:52:14.498897747Z" level=error msg="encountered an error cleaning up failed sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:14.499167 containerd[1482]: time="2025-02-13T19:52:14.498988185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:14.500082 kubelet[1855]: E0213 19:52:14.499595 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:14.500082 kubelet[1855]: E0213 19:52:14.499676 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:14.500082 kubelet[1855]: E0213 19:52:14.499710 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:14.500385 kubelet[1855]: E0213 19:52:14.499785 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:14.513741 containerd[1482]: time="2025-02-13T19:52:14.513011607Z" level=error msg="Failed to destroy network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:14.513741 containerd[1482]: time="2025-02-13T19:52:14.513498281Z" level=error msg="encountered an error cleaning up failed sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:14.513741 containerd[1482]: time="2025-02-13T19:52:14.513592910Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:14.514809 kubelet[1855]: E0213 19:52:14.514262 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:14.514809 kubelet[1855]: E0213 19:52:14.514394 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:14.514809 kubelet[1855]: E0213 19:52:14.514434 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:14.515090 kubelet[1855]: E0213 19:52:14.514500 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-ksk75" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" Feb 13 19:52:14.548755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506-shm.mount: Deactivated successfully. Feb 13 19:52:14.774905 kubelet[1855]: E0213 19:52:14.774748 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:14.792868 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
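
The "Unable to read config path" message that recurs every second is the kubelet's static-pod file source polling its configured staticPodPath, /etc/kubernetes/manifests, which does not exist on this worker node; it is noisy but harmless when no static pods are used. A minimal sketch of such a poll, assuming the same path and treating a missing directory as "no static pods"; it is not the kubelet's file_linux implementation:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// listStaticPodManifests checks a kubelet-style static pod directory and
// returns the manifest files in it; a missing directory is reported but
// treated as "no static pods", matching the log's "ignoring" behaviour.
func listStaticPodManifests(dir string) ([]string, error) {
	if _, err := os.Stat(dir); os.IsNotExist(err) {
		fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", dir)
		return nil, nil
	}
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var manifests []string
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if !e.IsDir() && (ext == ".yaml" || ext == ".json") {
			manifests = append(manifests, filepath.Join(dir, e.Name()))
		}
	}
	return manifests, nil
}

func main() {
	m, _ := listStaticPodManifests("/etc/kubernetes/manifests")
	fmt.Println("static pod manifests:", m)
}
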
Feb 13 19:52:15.025970 kubelet[1855]: I0213 19:52:15.024699 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6" Feb 13 19:52:15.026125 containerd[1482]: time="2025-02-13T19:52:15.025557101Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\"" Feb 13 19:52:15.026125 containerd[1482]: time="2025-02-13T19:52:15.025880180Z" level=info msg="Ensure that sandbox 4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6 in task-service has been cleanup successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.027055549Z" level=info msg="TearDown network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.027082826Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" returns successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.027417146Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\"" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.027548923Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.027565799Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.028619660Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.028779744Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.028803030Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.028886057Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\"" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.029108152Z" level=info msg="Ensure that sandbox 2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506 in task-service has been cleanup successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.029381759Z" level=info msg="TearDown network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.029403752Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" returns successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.029796351Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.029944150Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.029967147Z" level=info msg="StopPodSandbox 
for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:15.030786 containerd[1482]: time="2025-02-13T19:52:15.030697014Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:15.031563 kubelet[1855]: I0213 19:52:15.027879 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506" Feb 13 19:52:15.031645 containerd[1482]: time="2025-02-13T19:52:15.030809104Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:15.031645 containerd[1482]: time="2025-02-13T19:52:15.030827314Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully" Feb 13 19:52:15.031645 containerd[1482]: time="2025-02-13T19:52:15.031453480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:4,}" Feb 13 19:52:15.034028 systemd[1]: run-netns-cni\x2d5ac31876\x2d231b\x2da42f\x2dc3b6\x2d204ab016a3da.mount: Deactivated successfully. Feb 13 19:52:15.037714 containerd[1482]: time="2025-02-13T19:52:15.034910360Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:15.037714 containerd[1482]: time="2025-02-13T19:52:15.035037936Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:15.037714 containerd[1482]: time="2025-02-13T19:52:15.035057968Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:15.041564 systemd[1]: run-netns-cni\x2da95a2d63\x2d1b73\x2d644c\x2d98b8\x2d56013afa1deb.mount: Deactivated successfully. 
Feb 13 19:52:15.042092 containerd[1482]: time="2025-02-13T19:52:15.041798607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:3,}" Feb 13 19:52:15.260194 containerd[1482]: time="2025-02-13T19:52:15.259873347Z" level=error msg="Failed to destroy network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:15.260194 containerd[1482]: time="2025-02-13T19:52:15.260040230Z" level=error msg="Failed to destroy network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:15.260610 containerd[1482]: time="2025-02-13T19:52:15.260348090Z" level=error msg="encountered an error cleaning up failed sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:15.260610 containerd[1482]: time="2025-02-13T19:52:15.260463942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:15.261001 kubelet[1855]: E0213 19:52:15.260721 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:15.261001 kubelet[1855]: E0213 19:52:15.260801 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:15.261001 kubelet[1855]: E0213 19:52:15.260835 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:15.261199 kubelet[1855]: E0213 19:52:15.260893 1855 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:15.262657 containerd[1482]: time="2025-02-13T19:52:15.262497983Z" level=error msg="encountered an error cleaning up failed sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:15.262657 containerd[1482]: time="2025-02-13T19:52:15.262598167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:15.262872 kubelet[1855]: E0213 19:52:15.262832 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:15.262940 kubelet[1855]: E0213 19:52:15.262901 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:15.263008 kubelet[1855]: E0213 19:52:15.262934 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:15.263059 kubelet[1855]: E0213 19:52:15.262989 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-ksk75" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" Feb 13 19:52:15.550234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d-shm.mount: Deactivated successfully. Feb 13 19:52:15.775658 kubelet[1855]: E0213 19:52:15.775602 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:16.035903 kubelet[1855]: I0213 19:52:16.035618 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d" Feb 13 19:52:16.036806 containerd[1482]: time="2025-02-13T19:52:16.036766391Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\"" Feb 13 19:52:16.038431 containerd[1482]: time="2025-02-13T19:52:16.038139719Z" level=info msg="Ensure that sandbox dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d in task-service has been cleanup successfully" Feb 13 19:52:16.038977 containerd[1482]: time="2025-02-13T19:52:16.038847262Z" level=info msg="TearDown network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" successfully" Feb 13 19:52:16.038977 containerd[1482]: time="2025-02-13T19:52:16.038883110Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" returns successfully" Feb 13 19:52:16.040524 containerd[1482]: time="2025-02-13T19:52:16.040465070Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\"" Feb 13 19:52:16.040657 containerd[1482]: time="2025-02-13T19:52:16.040601657Z" level=info msg="TearDown network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" successfully" Feb 13 19:52:16.040657 containerd[1482]: time="2025-02-13T19:52:16.040621639Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" returns successfully" Feb 13 19:52:16.043022 containerd[1482]: time="2025-02-13T19:52:16.042958664Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\"" Feb 13 19:52:16.044704 containerd[1482]: time="2025-02-13T19:52:16.043151716Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully" Feb 13 19:52:16.044704 containerd[1482]: time="2025-02-13T19:52:16.043173760Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully" Feb 13 19:52:16.044704 containerd[1482]: time="2025-02-13T19:52:16.044151340Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" Feb 13 19:52:16.044704 containerd[1482]: time="2025-02-13T19:52:16.044269564Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully" Feb 13 19:52:16.044704 containerd[1482]: time="2025-02-13T19:52:16.044326587Z" level=info msg="StopPodSandbox for 
\"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully" Feb 13 19:52:16.045829 systemd[1]: run-netns-cni\x2d6e1c32bb\x2d54cf\x2d4569\x2d09d5\x2d7ee1241b6c07.mount: Deactivated successfully. Feb 13 19:52:16.048171 containerd[1482]: time="2025-02-13T19:52:16.047124671Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:16.048171 containerd[1482]: time="2025-02-13T19:52:16.047254335Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:16.048171 containerd[1482]: time="2025-02-13T19:52:16.047297497Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:16.049015 kubelet[1855]: I0213 19:52:16.047786 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885" Feb 13 19:52:16.049259 containerd[1482]: time="2025-02-13T19:52:16.048694931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:5,}" Feb 13 19:52:16.050799 containerd[1482]: time="2025-02-13T19:52:16.050309015Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\"" Feb 13 19:52:16.050799 containerd[1482]: time="2025-02-13T19:52:16.050593348Z" level=info msg="Ensure that sandbox 3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885 in task-service has been cleanup successfully" Feb 13 19:52:16.053302 containerd[1482]: time="2025-02-13T19:52:16.050995031Z" level=info msg="TearDown network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" successfully" Feb 13 19:52:16.053480 containerd[1482]: time="2025-02-13T19:52:16.053446217Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" returns successfully" Feb 13 19:52:16.054056 containerd[1482]: time="2025-02-13T19:52:16.054024183Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\"" Feb 13 19:52:16.054424 containerd[1482]: time="2025-02-13T19:52:16.054394670Z" level=info msg="TearDown network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" successfully" Feb 13 19:52:16.054992 containerd[1482]: time="2025-02-13T19:52:16.054951404Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" returns successfully" Feb 13 19:52:16.055883 systemd[1]: run-netns-cni\x2d9c46ec31\x2d78b8\x2dc040\x2d791a\x2d10929126ea6f.mount: Deactivated successfully. 
Feb 13 19:52:16.060633 containerd[1482]: time="2025-02-13T19:52:16.060562894Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:16.061004 containerd[1482]: time="2025-02-13T19:52:16.060896144Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:16.061004 containerd[1482]: time="2025-02-13T19:52:16.060923096Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully" Feb 13 19:52:16.064726 containerd[1482]: time="2025-02-13T19:52:16.064462331Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:16.064726 containerd[1482]: time="2025-02-13T19:52:16.064599875Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:16.064973 containerd[1482]: time="2025-02-13T19:52:16.064912659Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:16.066016 containerd[1482]: time="2025-02-13T19:52:16.065983000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:4,}" Feb 13 19:52:16.223529 containerd[1482]: time="2025-02-13T19:52:16.223256687Z" level=error msg="Failed to destroy network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:16.224313 containerd[1482]: time="2025-02-13T19:52:16.224129506Z" level=error msg="encountered an error cleaning up failed sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:16.224313 containerd[1482]: time="2025-02-13T19:52:16.224230162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:16.225341 kubelet[1855]: E0213 19:52:16.224843 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:16.225341 kubelet[1855]: E0213 19:52:16.224921 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:16.225341 kubelet[1855]: E0213 19:52:16.224955 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:16.225596 kubelet[1855]: E0213 19:52:16.225039 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:16.262027 containerd[1482]: time="2025-02-13T19:52:16.261342848Z" level=error msg="Failed to destroy network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:16.262027 containerd[1482]: time="2025-02-13T19:52:16.261783630Z" level=error msg="encountered an error cleaning up failed sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:16.262027 containerd[1482]: time="2025-02-13T19:52:16.261875113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:16.263080 kubelet[1855]: E0213 19:52:16.262582 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:16.263080 kubelet[1855]: E0213 19:52:16.262676 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:16.263080 kubelet[1855]: E0213 19:52:16.262708 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:16.263339 kubelet[1855]: E0213 19:52:16.262769 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-ksk75" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" Feb 13 19:52:16.547241 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623-shm.mount: Deactivated successfully. Feb 13 19:52:16.776879 kubelet[1855]: E0213 19:52:16.776759 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:17.060172 kubelet[1855]: I0213 19:52:17.060129 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623" Feb 13 19:52:17.061613 containerd[1482]: time="2025-02-13T19:52:17.060861776Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\"" Feb 13 19:52:17.061613 containerd[1482]: time="2025-02-13T19:52:17.061171534Z" level=info msg="Ensure that sandbox ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623 in task-service has been cleanup successfully" Feb 13 19:52:17.064768 containerd[1482]: time="2025-02-13T19:52:17.064687151Z" level=info msg="TearDown network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" successfully" Feb 13 19:52:17.064768 containerd[1482]: time="2025-02-13T19:52:17.064764820Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" returns successfully" Feb 13 19:52:17.065851 systemd[1]: run-netns-cni\x2d03ad6147\x2d7627\x2ddcd4\x2db764\x2de8225d7f2ff1.mount: Deactivated successfully. 
Feb 13 19:52:17.068849 containerd[1482]: time="2025-02-13T19:52:17.066488292Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\"" Feb 13 19:52:17.068849 containerd[1482]: time="2025-02-13T19:52:17.066623084Z" level=info msg="TearDown network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" successfully" Feb 13 19:52:17.068849 containerd[1482]: time="2025-02-13T19:52:17.066687347Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" returns successfully" Feb 13 19:52:17.069889 containerd[1482]: time="2025-02-13T19:52:17.069855443Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\"" Feb 13 19:52:17.070496 containerd[1482]: time="2025-02-13T19:52:17.070137381Z" level=info msg="TearDown network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" successfully" Feb 13 19:52:17.070496 containerd[1482]: time="2025-02-13T19:52:17.070162507Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" returns successfully" Feb 13 19:52:17.071255 containerd[1482]: time="2025-02-13T19:52:17.070921080Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\"" Feb 13 19:52:17.071255 containerd[1482]: time="2025-02-13T19:52:17.071034948Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully" Feb 13 19:52:17.071255 containerd[1482]: time="2025-02-13T19:52:17.071055986Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully" Feb 13 19:52:17.072928 containerd[1482]: time="2025-02-13T19:52:17.072515476Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" Feb 13 19:52:17.072928 containerd[1482]: time="2025-02-13T19:52:17.072628642Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully" Feb 13 19:52:17.072928 containerd[1482]: time="2025-02-13T19:52:17.072646761Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully" Feb 13 19:52:17.074032 containerd[1482]: time="2025-02-13T19:52:17.073132495Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:17.074032 containerd[1482]: time="2025-02-13T19:52:17.073882814Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:17.074032 containerd[1482]: time="2025-02-13T19:52:17.073917052Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:17.075163 kubelet[1855]: I0213 19:52:17.074380 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1" Feb 13 19:52:17.075877 containerd[1482]: time="2025-02-13T19:52:17.075824758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:6,}" Feb 13 19:52:17.076090 containerd[1482]: time="2025-02-13T19:52:17.076060540Z" level=info msg="StopPodSandbox for 
\"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\"" Feb 13 19:52:17.076431 containerd[1482]: time="2025-02-13T19:52:17.076399625Z" level=info msg="Ensure that sandbox f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1 in task-service has been cleanup successfully" Feb 13 19:52:17.080787 containerd[1482]: time="2025-02-13T19:52:17.080740858Z" level=info msg="TearDown network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" successfully" Feb 13 19:52:17.080787 containerd[1482]: time="2025-02-13T19:52:17.080782850Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" returns successfully" Feb 13 19:52:17.082057 systemd[1]: run-netns-cni\x2d1277bf7e\x2d75e0\x2dc071\x2d6e78\x2da2f93a5d8349.mount: Deactivated successfully. Feb 13 19:52:17.085116 containerd[1482]: time="2025-02-13T19:52:17.084326536Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\"" Feb 13 19:52:17.086270 containerd[1482]: time="2025-02-13T19:52:17.086235390Z" level=info msg="TearDown network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" successfully" Feb 13 19:52:17.086522 containerd[1482]: time="2025-02-13T19:52:17.086475476Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" returns successfully" Feb 13 19:52:17.088302 containerd[1482]: time="2025-02-13T19:52:17.088194789Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\"" Feb 13 19:52:17.088997 containerd[1482]: time="2025-02-13T19:52:17.088904137Z" level=info msg="TearDown network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" successfully" Feb 13 19:52:17.088997 containerd[1482]: time="2025-02-13T19:52:17.088936003Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" returns successfully" Feb 13 19:52:17.090804 containerd[1482]: time="2025-02-13T19:52:17.090082382Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:17.090804 containerd[1482]: time="2025-02-13T19:52:17.090420890Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:17.090804 containerd[1482]: time="2025-02-13T19:52:17.090443370Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully" Feb 13 19:52:17.091778 containerd[1482]: time="2025-02-13T19:52:17.091673207Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:17.091907 containerd[1482]: time="2025-02-13T19:52:17.091794297Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:17.091907 containerd[1482]: time="2025-02-13T19:52:17.091812523Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:17.094402 containerd[1482]: time="2025-02-13T19:52:17.093803956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:5,}" Feb 13 19:52:17.256336 containerd[1482]: time="2025-02-13T19:52:17.256234558Z" 
level=error msg="Failed to destroy network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:17.256730 containerd[1482]: time="2025-02-13T19:52:17.256687962Z" level=error msg="encountered an error cleaning up failed sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:17.256839 containerd[1482]: time="2025-02-13T19:52:17.256787511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:17.257679 kubelet[1855]: E0213 19:52:17.257253 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:17.257679 kubelet[1855]: E0213 19:52:17.257520 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:17.257679 kubelet[1855]: E0213 19:52:17.257582 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:17.258564 kubelet[1855]: E0213 19:52:17.257845 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:17.314347 
containerd[1482]: time="2025-02-13T19:52:17.312054809Z" level=error msg="Failed to destroy network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:17.314347 containerd[1482]: time="2025-02-13T19:52:17.312505781Z" level=error msg="encountered an error cleaning up failed sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:17.314347 containerd[1482]: time="2025-02-13T19:52:17.312603866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:17.314709 kubelet[1855]: E0213 19:52:17.312887 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:17.314709 kubelet[1855]: E0213 19:52:17.312955 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:17.314709 kubelet[1855]: E0213 19:52:17.312988 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:17.314886 kubelet[1855]: E0213 19:52:17.313046 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-85f456d6dd-ksk75" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" Feb 13 19:52:17.546822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3-shm.mount: Deactivated successfully. Feb 13 19:52:17.547338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2-shm.mount: Deactivated successfully. Feb 13 19:52:17.777442 kubelet[1855]: E0213 19:52:17.777389 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:18.082832 kubelet[1855]: I0213 19:52:18.082496 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2" Feb 13 19:52:18.084002 containerd[1482]: time="2025-02-13T19:52:18.083419135Z" level=info msg="StopPodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\"" Feb 13 19:52:18.084002 containerd[1482]: time="2025-02-13T19:52:18.083719447Z" level=info msg="Ensure that sandbox a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2 in task-service has been cleanup successfully" Feb 13 19:52:18.086857 containerd[1482]: time="2025-02-13T19:52:18.086815950Z" level=info msg="TearDown network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" successfully" Feb 13 19:52:18.086857 containerd[1482]: time="2025-02-13T19:52:18.086855294Z" level=info msg="StopPodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" returns successfully" Feb 13 19:52:18.088626 systemd[1]: run-netns-cni\x2d6cafa1fb\x2d4d7c\x2d6004\x2d5374\x2d2964f836956c.mount: Deactivated successfully. 
Feb 13 19:52:18.090402 containerd[1482]: time="2025-02-13T19:52:18.089707571Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\"" Feb 13 19:52:18.090402 containerd[1482]: time="2025-02-13T19:52:18.089844064Z" level=info msg="TearDown network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" successfully" Feb 13 19:52:18.090402 containerd[1482]: time="2025-02-13T19:52:18.089865117Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" returns successfully" Feb 13 19:52:18.092942 containerd[1482]: time="2025-02-13T19:52:18.092898634Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\"" Feb 13 19:52:18.093068 containerd[1482]: time="2025-02-13T19:52:18.093039652Z" level=info msg="TearDown network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" successfully" Feb 13 19:52:18.093068 containerd[1482]: time="2025-02-13T19:52:18.093059649Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" returns successfully" Feb 13 19:52:18.094188 containerd[1482]: time="2025-02-13T19:52:18.094152869Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\"" Feb 13 19:52:18.094520 containerd[1482]: time="2025-02-13T19:52:18.094491181Z" level=info msg="TearDown network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" successfully" Feb 13 19:52:18.094744 containerd[1482]: time="2025-02-13T19:52:18.094720244Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" returns successfully" Feb 13 19:52:18.095540 containerd[1482]: time="2025-02-13T19:52:18.095504709Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\"" Feb 13 19:52:18.095672 containerd[1482]: time="2025-02-13T19:52:18.095646997Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully" Feb 13 19:52:18.095734 containerd[1482]: time="2025-02-13T19:52:18.095684902Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully" Feb 13 19:52:18.096383 containerd[1482]: time="2025-02-13T19:52:18.096301138Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" Feb 13 19:52:18.096618 kubelet[1855]: I0213 19:52:18.096546 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3" Feb 13 19:52:18.097440 containerd[1482]: time="2025-02-13T19:52:18.097403079Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully" Feb 13 19:52:18.097440 containerd[1482]: time="2025-02-13T19:52:18.097437690Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully" Feb 13 19:52:18.097859 containerd[1482]: time="2025-02-13T19:52:18.097646928Z" level=info msg="StopPodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\"" Feb 13 19:52:18.097941 containerd[1482]: time="2025-02-13T19:52:18.097922209Z" level=info msg="Ensure that sandbox 
5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3 in task-service has been cleanup successfully" Feb 13 19:52:18.099458 containerd[1482]: time="2025-02-13T19:52:18.099176002Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:18.099458 containerd[1482]: time="2025-02-13T19:52:18.099330002Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:18.099458 containerd[1482]: time="2025-02-13T19:52:18.099349630Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:18.103910 containerd[1482]: time="2025-02-13T19:52:18.103864137Z" level=info msg="TearDown network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" successfully" Feb 13 19:52:18.103910 containerd[1482]: time="2025-02-13T19:52:18.103906169Z" level=info msg="StopPodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" returns successfully" Feb 13 19:52:18.104589 containerd[1482]: time="2025-02-13T19:52:18.104462537Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\"" Feb 13 19:52:18.104970 containerd[1482]: time="2025-02-13T19:52:18.104599808Z" level=info msg="TearDown network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" successfully" Feb 13 19:52:18.104970 containerd[1482]: time="2025-02-13T19:52:18.104617796Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" returns successfully" Feb 13 19:52:18.104970 containerd[1482]: time="2025-02-13T19:52:18.104859994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:7,}" Feb 13 19:52:18.107267 systemd[1]: run-netns-cni\x2d05b17ba2\x2d78a5\x2d3b18\x2d7b6b\x2dd90cedd31927.mount: Deactivated successfully. 
Feb 13 19:52:18.114504 containerd[1482]: time="2025-02-13T19:52:18.113973892Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\"" Feb 13 19:52:18.114504 containerd[1482]: time="2025-02-13T19:52:18.114110277Z" level=info msg="TearDown network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" successfully" Feb 13 19:52:18.114504 containerd[1482]: time="2025-02-13T19:52:18.114130598Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" returns successfully" Feb 13 19:52:18.114952 containerd[1482]: time="2025-02-13T19:52:18.114921891Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\"" Feb 13 19:52:18.115175 containerd[1482]: time="2025-02-13T19:52:18.115152721Z" level=info msg="TearDown network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" successfully" Feb 13 19:52:18.115315 containerd[1482]: time="2025-02-13T19:52:18.115270918Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" returns successfully" Feb 13 19:52:18.116671 containerd[1482]: time="2025-02-13T19:52:18.116637393Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:18.116925 containerd[1482]: time="2025-02-13T19:52:18.116899053Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:18.117090 containerd[1482]: time="2025-02-13T19:52:18.117067149Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully" Feb 13 19:52:18.123833 containerd[1482]: time="2025-02-13T19:52:18.123789882Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:18.124188 containerd[1482]: time="2025-02-13T19:52:18.124156987Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:18.124380 containerd[1482]: time="2025-02-13T19:52:18.124350401Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:18.129312 containerd[1482]: time="2025-02-13T19:52:18.128014445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:6,}" Feb 13 19:52:18.311610 containerd[1482]: time="2025-02-13T19:52:18.311409539Z" level=error msg="Failed to destroy network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:18.312870 containerd[1482]: time="2025-02-13T19:52:18.312667869Z" level=error msg="encountered an error cleaning up failed sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:18.312870 containerd[1482]: time="2025-02-13T19:52:18.312776441Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:18.313233 kubelet[1855]: E0213 19:52:18.313134 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:18.313233 kubelet[1855]: E0213 19:52:18.313219 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:18.313417 kubelet[1855]: E0213 19:52:18.313251 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:18.314351 kubelet[1855]: E0213 19:52:18.313507 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:18.326726 containerd[1482]: time="2025-02-13T19:52:18.326547326Z" level=error msg="Failed to destroy network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:18.327254 containerd[1482]: time="2025-02-13T19:52:18.327197886Z" level=error msg="encountered an error cleaning up failed sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:18.327671 containerd[1482]: 
time="2025-02-13T19:52:18.327370817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:18.327795 kubelet[1855]: E0213 19:52:18.327648 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:18.327795 kubelet[1855]: E0213 19:52:18.327720 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:18.327795 kubelet[1855]: E0213 19:52:18.327751 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:18.327960 kubelet[1855]: E0213 19:52:18.327813 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-ksk75" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" Feb 13 19:52:18.547857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d-shm.mount: Deactivated successfully. 
Feb 13 19:52:18.760318 kubelet[1855]: E0213 19:52:18.760240 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:18.779227 kubelet[1855]: E0213 19:52:18.779159 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:19.106178 kubelet[1855]: I0213 19:52:19.104652 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58" Feb 13 19:52:19.106400 containerd[1482]: time="2025-02-13T19:52:19.105651999Z" level=info msg="StopPodSandbox for \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\"" Feb 13 19:52:19.106400 containerd[1482]: time="2025-02-13T19:52:19.105954821Z" level=info msg="Ensure that sandbox 79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58 in task-service has been cleanup successfully" Feb 13 19:52:19.112268 containerd[1482]: time="2025-02-13T19:52:19.112225495Z" level=info msg="TearDown network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\" successfully" Feb 13 19:52:19.113352 systemd[1]: run-netns-cni\x2dc47fc8e0\x2da6b4\x2d77c5\x2dcebc\x2d6d7c3223c487.mount: Deactivated successfully. Feb 13 19:52:19.117825 containerd[1482]: time="2025-02-13T19:52:19.116099627Z" level=info msg="StopPodSandbox for \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\" returns successfully" Feb 13 19:52:19.118865 containerd[1482]: time="2025-02-13T19:52:19.118525076Z" level=info msg="StopPodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\"" Feb 13 19:52:19.118865 containerd[1482]: time="2025-02-13T19:52:19.118661623Z" level=info msg="TearDown network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" successfully" Feb 13 19:52:19.118865 containerd[1482]: time="2025-02-13T19:52:19.118680396Z" level=info msg="StopPodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" returns successfully" Feb 13 19:52:19.119826 containerd[1482]: time="2025-02-13T19:52:19.119796745Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\"" Feb 13 19:52:19.120077 containerd[1482]: time="2025-02-13T19:52:19.120053909Z" level=info msg="TearDown network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" successfully" Feb 13 19:52:19.120263 containerd[1482]: time="2025-02-13T19:52:19.120238657Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" returns successfully" Feb 13 19:52:19.121854 containerd[1482]: time="2025-02-13T19:52:19.121617571Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\"" Feb 13 19:52:19.121854 containerd[1482]: time="2025-02-13T19:52:19.121741671Z" level=info msg="TearDown network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" successfully" Feb 13 19:52:19.121854 containerd[1482]: time="2025-02-13T19:52:19.121760229Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" returns successfully" Feb 13 19:52:19.122962 containerd[1482]: time="2025-02-13T19:52:19.122660612Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\"" Feb 13 19:52:19.122962 containerd[1482]: 
time="2025-02-13T19:52:19.122780994Z" level=info msg="TearDown network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" successfully" Feb 13 19:52:19.122962 containerd[1482]: time="2025-02-13T19:52:19.122798701Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" returns successfully" Feb 13 19:52:19.124172 containerd[1482]: time="2025-02-13T19:52:19.123689733Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:19.125319 containerd[1482]: time="2025-02-13T19:52:19.125026280Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:19.125548 containerd[1482]: time="2025-02-13T19:52:19.125485490Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully" Feb 13 19:52:19.127307 kubelet[1855]: I0213 19:52:19.126445 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d" Feb 13 19:52:19.127425 containerd[1482]: time="2025-02-13T19:52:19.127327831Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:19.127529 containerd[1482]: time="2025-02-13T19:52:19.127500239Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:19.127610 containerd[1482]: time="2025-02-13T19:52:19.127530999Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:19.127973 containerd[1482]: time="2025-02-13T19:52:19.127947575Z" level=info msg="StopPodSandbox for \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\"" Feb 13 19:52:19.128928 containerd[1482]: time="2025-02-13T19:52:19.128898304Z" level=info msg="Ensure that sandbox 6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d in task-service has been cleanup successfully" Feb 13 19:52:19.133591 containerd[1482]: time="2025-02-13T19:52:19.133541617Z" level=info msg="TearDown network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\" successfully" Feb 13 19:52:19.136659 containerd[1482]: time="2025-02-13T19:52:19.134752253Z" level=info msg="StopPodSandbox for \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\" returns successfully" Feb 13 19:52:19.136355 systemd[1]: run-netns-cni\x2d4002ef53\x2dbb19\x2dc0b8\x2dca35\x2dac829c0f51bf.mount: Deactivated successfully. 
Feb 13 19:52:19.140511 containerd[1482]: time="2025-02-13T19:52:19.140460101Z" level=info msg="StopPodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\"" Feb 13 19:52:19.140803 containerd[1482]: time="2025-02-13T19:52:19.140777949Z" level=info msg="TearDown network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" successfully" Feb 13 19:52:19.140926 containerd[1482]: time="2025-02-13T19:52:19.140904186Z" level=info msg="StopPodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" returns successfully" Feb 13 19:52:19.142615 containerd[1482]: time="2025-02-13T19:52:19.142575154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:7,}" Feb 13 19:52:19.146692 containerd[1482]: time="2025-02-13T19:52:19.146643927Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\"" Feb 13 19:52:19.146834 containerd[1482]: time="2025-02-13T19:52:19.146807295Z" level=info msg="TearDown network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" successfully" Feb 13 19:52:19.146834 containerd[1482]: time="2025-02-13T19:52:19.146827486Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" returns successfully" Feb 13 19:52:19.149596 containerd[1482]: time="2025-02-13T19:52:19.149493704Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\"" Feb 13 19:52:19.149913 containerd[1482]: time="2025-02-13T19:52:19.149675019Z" level=info msg="TearDown network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" successfully" Feb 13 19:52:19.149913 containerd[1482]: time="2025-02-13T19:52:19.149709149Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" returns successfully" Feb 13 19:52:19.152454 containerd[1482]: time="2025-02-13T19:52:19.152260790Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\"" Feb 13 19:52:19.152984 containerd[1482]: time="2025-02-13T19:52:19.152481751Z" level=info msg="TearDown network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" successfully" Feb 13 19:52:19.152984 containerd[1482]: time="2025-02-13T19:52:19.152502038Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" returns successfully" Feb 13 19:52:19.153612 containerd[1482]: time="2025-02-13T19:52:19.153581972Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\"" Feb 13 19:52:19.154216 containerd[1482]: time="2025-02-13T19:52:19.154092426Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully" Feb 13 19:52:19.154216 containerd[1482]: time="2025-02-13T19:52:19.154118344Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully" Feb 13 19:52:19.161071 containerd[1482]: time="2025-02-13T19:52:19.160534869Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" Feb 13 19:52:19.161071 containerd[1482]: time="2025-02-13T19:52:19.160679773Z" level=info msg="TearDown network for sandbox 
\"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully" Feb 13 19:52:19.161071 containerd[1482]: time="2025-02-13T19:52:19.160742634Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully" Feb 13 19:52:19.163253 containerd[1482]: time="2025-02-13T19:52:19.163211130Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:19.163423 containerd[1482]: time="2025-02-13T19:52:19.163378084Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:19.163423 containerd[1482]: time="2025-02-13T19:52:19.163397732Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:19.164460 containerd[1482]: time="2025-02-13T19:52:19.164407765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:8,}" Feb 13 19:52:19.292926 containerd[1482]: time="2025-02-13T19:52:19.292761894Z" level=error msg="Failed to destroy network for sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:19.293790 containerd[1482]: time="2025-02-13T19:52:19.293591048Z" level=error msg="encountered an error cleaning up failed sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:19.293790 containerd[1482]: time="2025-02-13T19:52:19.293688784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:19.295248 kubelet[1855]: E0213 19:52:19.294211 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:19.295248 kubelet[1855]: E0213 19:52:19.294853 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:19.295248 kubelet[1855]: E0213 19:52:19.294891 1855 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-ksk75" Feb 13 19:52:19.295561 kubelet[1855]: E0213 19:52:19.294958 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-ksk75_default(8a69c858-fead-44ad-aa24-1c0fd99da2c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-ksk75" podUID="8a69c858-fead-44ad-aa24-1c0fd99da2c3" Feb 13 19:52:19.366437 containerd[1482]: time="2025-02-13T19:52:19.364998248Z" level=error msg="Failed to destroy network for sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:19.366437 containerd[1482]: time="2025-02-13T19:52:19.365482641Z" level=error msg="encountered an error cleaning up failed sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:19.366437 containerd[1482]: time="2025-02-13T19:52:19.365570866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:19.366723 kubelet[1855]: E0213 19:52:19.365871 1855 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:52:19.366723 kubelet[1855]: E0213 19:52:19.365949 1855 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 
19:52:19.366723 kubelet[1855]: E0213 19:52:19.365983 1855 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-76gzr" Feb 13 19:52:19.366902 kubelet[1855]: E0213 19:52:19.366047 1855 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-76gzr_calico-system(482935c4-4939-47ca-9a60-130d52de95d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-76gzr" podUID="482935c4-4939-47ca-9a60-130d52de95d3" Feb 13 19:52:19.404893 containerd[1482]: time="2025-02-13T19:52:19.404822440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:19.406172 containerd[1482]: time="2025-02-13T19:52:19.406095585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:52:19.407313 containerd[1482]: time="2025-02-13T19:52:19.407199221Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:19.410045 containerd[1482]: time="2025-02-13T19:52:19.409974148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:19.411649 containerd[1482]: time="2025-02-13T19:52:19.410939767Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.425569521s" Feb 13 19:52:19.411649 containerd[1482]: time="2025-02-13T19:52:19.410988242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:52:19.421816 containerd[1482]: time="2025-02-13T19:52:19.421748009Z" level=info msg="CreateContainer within sandbox \"d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:52:19.440405 containerd[1482]: time="2025-02-13T19:52:19.440342990Z" level=info msg="CreateContainer within sandbox \"d853fb0ab8a386a4a590f71b5b37d0c846e4b62270e323dc6571b2412969d1c9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8130977f322bc23d23dcd900cefac103987b478981464c15970631ab9da174de\"" Feb 13 19:52:19.441358 containerd[1482]: 
time="2025-02-13T19:52:19.441083873Z" level=info msg="StartContainer for \"8130977f322bc23d23dcd900cefac103987b478981464c15970631ab9da174de\"" Feb 13 19:52:19.477591 systemd[1]: Started cri-containerd-8130977f322bc23d23dcd900cefac103987b478981464c15970631ab9da174de.scope - libcontainer container 8130977f322bc23d23dcd900cefac103987b478981464c15970631ab9da174de. Feb 13 19:52:19.528775 containerd[1482]: time="2025-02-13T19:52:19.528722569Z" level=info msg="StartContainer for \"8130977f322bc23d23dcd900cefac103987b478981464c15970631ab9da174de\" returns successfully" Feb 13 19:52:19.555792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a-shm.mount: Deactivated successfully. Feb 13 19:52:19.556262 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255-shm.mount: Deactivated successfully. Feb 13 19:52:19.556590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684949253.mount: Deactivated successfully. Feb 13 19:52:19.643075 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:52:19.643324 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:52:19.780316 kubelet[1855]: E0213 19:52:19.780178 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:20.146688 kubelet[1855]: I0213 19:52:20.146541 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a" Feb 13 19:52:20.152677 containerd[1482]: time="2025-02-13T19:52:20.148540784Z" level=info msg="StopPodSandbox for \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\"" Feb 13 19:52:20.152677 containerd[1482]: time="2025-02-13T19:52:20.148868439Z" level=info msg="Ensure that sandbox 20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a in task-service has been cleanup successfully" Feb 13 19:52:20.152677 containerd[1482]: time="2025-02-13T19:52:20.152050052Z" level=info msg="TearDown network for sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\" successfully" Feb 13 19:52:20.152677 containerd[1482]: time="2025-02-13T19:52:20.152083303Z" level=info msg="StopPodSandbox for \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\" returns successfully" Feb 13 19:52:20.155121 systemd[1]: run-netns-cni\x2ddc9a1202\x2dc2e5\x2daef6\x2d7118\x2d6a679216ff3a.mount: Deactivated successfully. 
Feb 13 19:52:20.161132 containerd[1482]: time="2025-02-13T19:52:20.160421206Z" level=info msg="StopPodSandbox for \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\"" Feb 13 19:52:20.161132 containerd[1482]: time="2025-02-13T19:52:20.160559103Z" level=info msg="TearDown network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\" successfully" Feb 13 19:52:20.161132 containerd[1482]: time="2025-02-13T19:52:20.160576141Z" level=info msg="StopPodSandbox for \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\" returns successfully" Feb 13 19:52:20.161663 containerd[1482]: time="2025-02-13T19:52:20.161444484Z" level=info msg="StopPodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\"" Feb 13 19:52:20.161663 containerd[1482]: time="2025-02-13T19:52:20.161617815Z" level=info msg="TearDown network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" successfully" Feb 13 19:52:20.161663 containerd[1482]: time="2025-02-13T19:52:20.161638935Z" level=info msg="StopPodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" returns successfully" Feb 13 19:52:20.163552 containerd[1482]: time="2025-02-13T19:52:20.163352648Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\"" Feb 13 19:52:20.163552 containerd[1482]: time="2025-02-13T19:52:20.163514416Z" level=info msg="TearDown network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" successfully" Feb 13 19:52:20.164264 containerd[1482]: time="2025-02-13T19:52:20.164024596Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" returns successfully" Feb 13 19:52:20.164975 containerd[1482]: time="2025-02-13T19:52:20.164857298Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\"" Feb 13 19:52:20.165254 containerd[1482]: time="2025-02-13T19:52:20.165131509Z" level=info msg="TearDown network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" successfully" Feb 13 19:52:20.165254 containerd[1482]: time="2025-02-13T19:52:20.165190703Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" returns successfully" Feb 13 19:52:20.166162 containerd[1482]: time="2025-02-13T19:52:20.166117613Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\"" Feb 13 19:52:20.166502 kubelet[1855]: I0213 19:52:20.166467 1855 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255" Feb 13 19:52:20.166824 containerd[1482]: time="2025-02-13T19:52:20.166702051Z" level=info msg="TearDown network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" successfully" Feb 13 19:52:20.167054 containerd[1482]: time="2025-02-13T19:52:20.166941645Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" returns successfully" Feb 13 19:52:20.169313 containerd[1482]: time="2025-02-13T19:52:20.167711337Z" level=info msg="StopPodSandbox for \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\"" Feb 13 19:52:20.169313 containerd[1482]: time="2025-02-13T19:52:20.167976422Z" level=info msg="Ensure that sandbox 
e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255 in task-service has been cleanup successfully" Feb 13 19:52:20.169747 containerd[1482]: time="2025-02-13T19:52:20.169709316Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\"" Feb 13 19:52:20.171354 containerd[1482]: time="2025-02-13T19:52:20.170003223Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully" Feb 13 19:52:20.171603 containerd[1482]: time="2025-02-13T19:52:20.171558689Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully" Feb 13 19:52:20.171935 containerd[1482]: time="2025-02-13T19:52:20.171533025Z" level=info msg="TearDown network for sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\" successfully" Feb 13 19:52:20.171935 containerd[1482]: time="2025-02-13T19:52:20.171816087Z" level=info msg="StopPodSandbox for \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\" returns successfully" Feb 13 19:52:20.173535 systemd[1]: run-netns-cni\x2d0994d035\x2d56e6\x2d6bbb\x2df0d7\x2d812cf8917ca1.mount: Deactivated successfully. Feb 13 19:52:20.175012 containerd[1482]: time="2025-02-13T19:52:20.174779445Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\"" Feb 13 19:52:20.175583 containerd[1482]: time="2025-02-13T19:52:20.175334250Z" level=info msg="StopPodSandbox for \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\"" Feb 13 19:52:20.175583 containerd[1482]: time="2025-02-13T19:52:20.175454911Z" level=info msg="TearDown network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\" successfully" Feb 13 19:52:20.175583 containerd[1482]: time="2025-02-13T19:52:20.175474583Z" level=info msg="StopPodSandbox for \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\" returns successfully" Feb 13 19:52:20.176142 containerd[1482]: time="2025-02-13T19:52:20.175968233Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully" Feb 13 19:52:20.176142 containerd[1482]: time="2025-02-13T19:52:20.176048573Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully" Feb 13 19:52:20.177012 containerd[1482]: time="2025-02-13T19:52:20.176834474Z" level=info msg="StopPodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\"" Feb 13 19:52:20.177012 containerd[1482]: time="2025-02-13T19:52:20.176956964Z" level=info msg="TearDown network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" successfully" Feb 13 19:52:20.177012 containerd[1482]: time="2025-02-13T19:52:20.176974807Z" level=info msg="StopPodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" returns successfully" Feb 13 19:52:20.177218 containerd[1482]: time="2025-02-13T19:52:20.177056376Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\"" Feb 13 19:52:20.177218 containerd[1482]: time="2025-02-13T19:52:20.177159253Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully" Feb 13 19:52:20.177218 containerd[1482]: time="2025-02-13T19:52:20.177174160Z" level=info msg="StopPodSandbox for 
\"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully" Feb 13 19:52:20.178355 containerd[1482]: time="2025-02-13T19:52:20.178313264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:9,}" Feb 13 19:52:20.178615 containerd[1482]: time="2025-02-13T19:52:20.178567142Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\"" Feb 13 19:52:20.178955 containerd[1482]: time="2025-02-13T19:52:20.178813670Z" level=info msg="TearDown network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" successfully" Feb 13 19:52:20.178955 containerd[1482]: time="2025-02-13T19:52:20.178836451Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" returns successfully" Feb 13 19:52:20.179933 containerd[1482]: time="2025-02-13T19:52:20.179719891Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\"" Feb 13 19:52:20.179933 containerd[1482]: time="2025-02-13T19:52:20.179841522Z" level=info msg="TearDown network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" successfully" Feb 13 19:52:20.179933 containerd[1482]: time="2025-02-13T19:52:20.179860243Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" returns successfully" Feb 13 19:52:20.181668 containerd[1482]: time="2025-02-13T19:52:20.180921726Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\"" Feb 13 19:52:20.182367 containerd[1482]: time="2025-02-13T19:52:20.182334283Z" level=info msg="TearDown network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" successfully" Feb 13 19:52:20.183681 containerd[1482]: time="2025-02-13T19:52:20.183327466Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" returns successfully" Feb 13 19:52:20.184421 containerd[1482]: time="2025-02-13T19:52:20.183956357Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:20.184421 containerd[1482]: time="2025-02-13T19:52:20.184072477Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:20.184421 containerd[1482]: time="2025-02-13T19:52:20.184101631Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully" Feb 13 19:52:20.184853 containerd[1482]: time="2025-02-13T19:52:20.184825913Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:20.185293 containerd[1482]: time="2025-02-13T19:52:20.185252140Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:20.186228 containerd[1482]: time="2025-02-13T19:52:20.186185298Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:20.187783 containerd[1482]: time="2025-02-13T19:52:20.187255976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:8,}" 
Feb 13 19:52:20.434647 systemd-networkd[1395]: cali22f52e0007a: Link UP Feb 13 19:52:20.437032 systemd-networkd[1395]: cali22f52e0007a: Gained carrier Feb 13 19:52:20.454533 kubelet[1855]: I0213 19:52:20.454456 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bfchc" podStartSLOduration=5.309855658 podStartE2EDuration="22.454418942s" podCreationTimestamp="2025-02-13 19:51:58 +0000 UTC" firstStartedPulling="2025-02-13 19:52:02.2676766 +0000 UTC m=+4.279039236" lastFinishedPulling="2025-02-13 19:52:19.412239879 +0000 UTC m=+21.423602520" observedRunningTime="2025-02-13 19:52:20.156841766 +0000 UTC m=+22.168204411" watchObservedRunningTime="2025-02-13 19:52:20.454418942 +0000 UTC m=+22.465781573" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.297 [INFO][2948] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.318 [INFO][2948] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.69-k8s-csi--node--driver--76gzr-eth0 csi-node-driver- calico-system 482935c4-4939-47ca-9a60-130d52de95d3 1015 0 2025-02-13 19:51:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.128.0.69 csi-node-driver-76gzr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali22f52e0007a [] []}} ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Namespace="calico-system" Pod="csi-node-driver-76gzr" WorkloadEndpoint="10.128.0.69-k8s-csi--node--driver--76gzr-" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.319 [INFO][2948] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Namespace="calico-system" Pod="csi-node-driver-76gzr" WorkloadEndpoint="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.368 [INFO][2972] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" HandleID="k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Workload="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.381 [INFO][2972] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" HandleID="k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Workload="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004dc590), Attrs:map[string]string{"namespace":"calico-system", "node":"10.128.0.69", "pod":"csi-node-driver-76gzr", "timestamp":"2025-02-13 19:52:20.368822344 +0000 UTC"}, Hostname:"10.128.0.69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.381 [INFO][2972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.381 [INFO][2972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.381 [INFO][2972] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.69' Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.383 [INFO][2972] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.389 [INFO][2972] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.395 [INFO][2972] ipam/ipam.go 489: Trying affinity for 192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.397 [INFO][2972] ipam/ipam.go 155: Attempting to load block cidr=192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.400 [INFO][2972] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.401 [INFO][2972] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.403 [INFO][2972] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.408 [INFO][2972] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.417 [INFO][2972] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.113.193/26] block=192.168.113.192/26 handle="k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.417 [INFO][2972] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.193/26] handle="k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" host="10.128.0.69" Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.417 [INFO][2972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:52:20.456650 containerd[1482]: 2025-02-13 19:52:20.417 [INFO][2972] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.193/26] IPv6=[] ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" HandleID="k8s-pod-network.2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Workload="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" Feb 13 19:52:20.458036 containerd[1482]: 2025-02-13 19:52:20.420 [INFO][2948] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Namespace="calico-system" Pod="csi-node-driver-76gzr" WorkloadEndpoint="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.69-k8s-csi--node--driver--76gzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"482935c4-4939-47ca-9a60-130d52de95d3", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.69", ContainerID:"", Pod:"csi-node-driver-76gzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22f52e0007a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:52:20.458036 containerd[1482]: 2025-02-13 19:52:20.420 [INFO][2948] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.113.193/32] ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Namespace="calico-system" Pod="csi-node-driver-76gzr" WorkloadEndpoint="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" Feb 13 19:52:20.458036 containerd[1482]: 2025-02-13 19:52:20.420 [INFO][2948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22f52e0007a ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Namespace="calico-system" Pod="csi-node-driver-76gzr" WorkloadEndpoint="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" Feb 13 19:52:20.458036 containerd[1482]: 2025-02-13 19:52:20.435 [INFO][2948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Namespace="calico-system" Pod="csi-node-driver-76gzr" WorkloadEndpoint="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" Feb 13 19:52:20.458036 containerd[1482]: 2025-02-13 19:52:20.441 [INFO][2948] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Namespace="calico-system" Pod="csi-node-driver-76gzr" 
WorkloadEndpoint="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.69-k8s-csi--node--driver--76gzr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"482935c4-4939-47ca-9a60-130d52de95d3", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.69", ContainerID:"2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad", Pod:"csi-node-driver-76gzr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali22f52e0007a", MAC:"ce:82:f8:2f:75:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:52:20.458036 containerd[1482]: 2025-02-13 19:52:20.454 [INFO][2948] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad" Namespace="calico-system" Pod="csi-node-driver-76gzr" WorkloadEndpoint="10.128.0.69-k8s-csi--node--driver--76gzr-eth0" Feb 13 19:52:20.482605 systemd-networkd[1395]: calicfcd56df5d9: Link UP Feb 13 19:52:20.482974 systemd-networkd[1395]: calicfcd56df5d9: Gained carrier Feb 13 19:52:20.501880 containerd[1482]: time="2025-02-13T19:52:20.501662005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.300 [INFO][2957] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.322 [INFO][2957] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0 nginx-deployment-85f456d6dd- default 8a69c858-fead-44ad-aa24-1c0fd99da2c3 1097 0 2025-02-13 19:52:12 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.128.0.69 nginx-deployment-85f456d6dd-ksk75 eth0 default [] [] [kns.default ksa.default.default] calicfcd56df5d9 [] []}} ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Namespace="default" Pod="nginx-deployment-85f456d6dd-ksk75" WorkloadEndpoint="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.322 [INFO][2957] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Namespace="default" Pod="nginx-deployment-85f456d6dd-ksk75" WorkloadEndpoint="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.370 [INFO][2976] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" HandleID="k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Workload="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.385 [INFO][2976] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" HandleID="k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Workload="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334db0), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.69", "pod":"nginx-deployment-85f456d6dd-ksk75", "timestamp":"2025-02-13 19:52:20.37003334 +0000 UTC"}, Hostname:"10.128.0.69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.386 [INFO][2976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.417 [INFO][2976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.417 [INFO][2976] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.69' Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.420 [INFO][2976] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.426 [INFO][2976] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.438 [INFO][2976] ipam/ipam.go 489: Trying affinity for 192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.444 [INFO][2976] ipam/ipam.go 155: Attempting to load block cidr=192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.449 [INFO][2976] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.450 [INFO][2976] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.453 [INFO][2976] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041 Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.461 [INFO][2976] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.470 [INFO][2976] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.113.194/26] block=192.168.113.192/26 handle="k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.470 [INFO][2976] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.194/26] handle="k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" host="10.128.0.69" Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.470 [INFO][2976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
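The ipam.go walk is identical for both pods: take the host-wide lock, load the block this host has affinity for (192.168.113.192/26), claim the next free address, write the block back, release the lock. That ordering is why csi-node-driver-76gzr receives 192.168.113.193 and nginx-deployment-85f456d6dd-ksk75 receives 192.168.113.194; the block's first address, 192.168.113.192, is not handed to either pod and shows up further down as the node's vxlan.calico tunnel address in the ntpd listener entries. A toy in-memory version of that ordered claim, not Calico's actual allocator:

```go
package main

import (
	"fmt"
	"net/netip"
)

// blockAllocator hands out addresses from one affine block in order, the way
// the ipam.go lines above walk through it. Illustration only: the real
// allocator persists blocks in the datastore and tracks a handle per IP.
type blockAllocator struct {
	prefix netip.Prefix          // the block this host has affinity for
	used   map[netip.Addr]string // address -> handle that claimed it
}

func newBlockAllocator(cidr string, reserved ...netip.Addr) (*blockAllocator, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	b := &blockAllocator{prefix: p, used: make(map[netip.Addr]string)}
	for _, a := range reserved {
		b.used[a] = "reserved"
	}
	return b, nil
}

// assign claims the lowest free address in the block for the given handle.
func (b *blockAllocator) assign(handle string) (netip.Addr, error) {
	for a := b.prefix.Addr(); b.prefix.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.prefix)
}

func main() {
	// .192 is modeled as already reserved; it later appears as the node's
	// vxlan.calico tunnel address in the ntpd entries.
	alloc, err := newBlockAllocator("192.168.113.192/26", netip.MustParseAddr("192.168.113.192"))
	if err != nil {
		panic(err)
	}
	for _, pod := range []string{"csi-node-driver-76gzr", "nginx-deployment-85f456d6dd-ksk75"} {
		ip, _ := alloc.assign(pod)
		fmt.Printf("%-36s -> %s\n", pod, ip) // .193, then .194, matching the log
	}
}
```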
Feb 13 19:52:20.502158 containerd[1482]: 2025-02-13 19:52:20.470 [INFO][2976] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.194/26] IPv6=[] ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" HandleID="k8s-pod-network.61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Workload="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" Feb 13 19:52:20.503898 containerd[1482]: 2025-02-13 19:52:20.473 [INFO][2957] cni-plugin/k8s.go 386: Populated endpoint ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Namespace="default" Pod="nginx-deployment-85f456d6dd-ksk75" WorkloadEndpoint="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8a69c858-fead-44ad-aa24-1c0fd99da2c3", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.69", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-ksk75", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calicfcd56df5d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:52:20.503898 containerd[1482]: 2025-02-13 19:52:20.474 [INFO][2957] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.113.194/32] ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Namespace="default" Pod="nginx-deployment-85f456d6dd-ksk75" WorkloadEndpoint="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" Feb 13 19:52:20.503898 containerd[1482]: 2025-02-13 19:52:20.474 [INFO][2957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfcd56df5d9 ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Namespace="default" Pod="nginx-deployment-85f456d6dd-ksk75" WorkloadEndpoint="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" Feb 13 19:52:20.503898 containerd[1482]: 2025-02-13 19:52:20.481 [INFO][2957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Namespace="default" Pod="nginx-deployment-85f456d6dd-ksk75" WorkloadEndpoint="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" Feb 13 19:52:20.503898 containerd[1482]: 2025-02-13 19:52:20.481 [INFO][2957] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Namespace="default" Pod="nginx-deployment-85f456d6dd-ksk75" WorkloadEndpoint="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"8a69c858-fead-44ad-aa24-1c0fd99da2c3", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 52, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.69", ContainerID:"61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041", Pod:"nginx-deployment-85f456d6dd-ksk75", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calicfcd56df5d9", MAC:"be:71:d6:1a:17:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:52:20.503898 containerd[1482]: 2025-02-13 19:52:20.499 [INFO][2957] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041" Namespace="default" Pod="nginx-deployment-85f456d6dd-ksk75" WorkloadEndpoint="10.128.0.69-k8s-nginx--deployment--85f456d6dd--ksk75-eth0" Feb 13 19:52:20.503898 containerd[1482]: time="2025-02-13T19:52:20.502369474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:20.504930 containerd[1482]: time="2025-02-13T19:52:20.502464005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:20.504930 containerd[1482]: time="2025-02-13T19:52:20.504645870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:20.535569 systemd[1]: Started cri-containerd-2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad.scope - libcontainer container 2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad. Feb 13 19:52:20.562414 containerd[1482]: time="2025-02-13T19:52:20.561925949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:20.562414 containerd[1482]: time="2025-02-13T19:52:20.562009858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:20.562414 containerd[1482]: time="2025-02-13T19:52:20.562037953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:20.562414 containerd[1482]: time="2025-02-13T19:52:20.562169182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:20.609248 containerd[1482]: time="2025-02-13T19:52:20.609170945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76gzr,Uid:482935c4-4939-47ca-9a60-130d52de95d3,Namespace:calico-system,Attempt:9,} returns sandbox id \"2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad\"" Feb 13 19:52:20.612915 containerd[1482]: time="2025-02-13T19:52:20.612542051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:52:20.613608 systemd[1]: Started cri-containerd-61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041.scope - libcontainer container 61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041. Feb 13 19:52:20.668222 containerd[1482]: time="2025-02-13T19:52:20.668173357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ksk75,Uid:8a69c858-fead-44ad-aa24-1c0fd99da2c3,Namespace:default,Attempt:8,} returns sandbox id \"61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041\"" Feb 13 19:52:20.780854 kubelet[1855]: E0213 19:52:20.780789 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:21.431319 kernel: bpftool[3227]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:52:21.639296 systemd-networkd[1395]: calicfcd56df5d9: Gained IPv6LL Feb 13 19:52:21.781646 kubelet[1855]: E0213 19:52:21.781588 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:21.790640 systemd-networkd[1395]: vxlan.calico: Link UP Feb 13 19:52:21.790653 systemd-networkd[1395]: vxlan.calico: Gained carrier Feb 13 19:52:21.959101 containerd[1482]: time="2025-02-13T19:52:21.959033907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:21.961346 containerd[1482]: time="2025-02-13T19:52:21.961213897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:52:21.963314 containerd[1482]: time="2025-02-13T19:52:21.962962307Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:21.968134 containerd[1482]: time="2025-02-13T19:52:21.968074325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:21.969553 containerd[1482]: time="2025-02-13T19:52:21.969084844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.35648791s" Feb 13 19:52:21.969553 containerd[1482]: time="2025-02-13T19:52:21.969423781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:52:21.974102 containerd[1482]: time="2025-02-13T19:52:21.973608713Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 
19:52:21.974713 containerd[1482]: time="2025-02-13T19:52:21.974532874Z" level=info msg="CreateContainer within sandbox \"2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:52:22.001423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292472808.mount: Deactivated successfully. Feb 13 19:52:22.009139 containerd[1482]: time="2025-02-13T19:52:22.008649343Z" level=info msg="CreateContainer within sandbox \"2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"72de44c3867b2ef02ecf3b5961c4463be01c435ea225fe16ea57793d69d0e0b2\"" Feb 13 19:52:22.009525 containerd[1482]: time="2025-02-13T19:52:22.009476927Z" level=info msg="StartContainer for \"72de44c3867b2ef02ecf3b5961c4463be01c435ea225fe16ea57793d69d0e0b2\"" Feb 13 19:52:22.077998 systemd[1]: Started cri-containerd-72de44c3867b2ef02ecf3b5961c4463be01c435ea225fe16ea57793d69d0e0b2.scope - libcontainer container 72de44c3867b2ef02ecf3b5961c4463be01c435ea225fe16ea57793d69d0e0b2. Feb 13 19:52:22.138896 containerd[1482]: time="2025-02-13T19:52:22.138718914Z" level=info msg="StartContainer for \"72de44c3867b2ef02ecf3b5961c4463be01c435ea225fe16ea57793d69d0e0b2\" returns successfully" Feb 13 19:52:22.150636 systemd-networkd[1395]: cali22f52e0007a: Gained IPv6LL Feb 13 19:52:22.782097 kubelet[1855]: E0213 19:52:22.782027 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:23.431213 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Feb 13 19:52:23.782864 kubelet[1855]: E0213 19:52:23.782622 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:24.698529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103941054.mount: Deactivated successfully. 
Feb 13 19:52:24.783852 kubelet[1855]: E0213 19:52:24.783556 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:25.783860 kubelet[1855]: E0213 19:52:25.783711 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:25.911668 ntpd[1453]: Listen normally on 8 vxlan.calico 192.168.113.192:123 Feb 13 19:52:25.912642 ntpd[1453]: 13 Feb 19:52:25 ntpd[1453]: Listen normally on 8 vxlan.calico 192.168.113.192:123 Feb 13 19:52:25.912642 ntpd[1453]: 13 Feb 19:52:25 ntpd[1453]: Listen normally on 9 cali22f52e0007a [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:52:25.912642 ntpd[1453]: 13 Feb 19:52:25 ntpd[1453]: Listen normally on 10 calicfcd56df5d9 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:52:25.912642 ntpd[1453]: 13 Feb 19:52:25 ntpd[1453]: Listen normally on 11 vxlan.calico [fe80::64d5:31ff:fe6e:ec99%5]:123 Feb 13 19:52:25.911813 ntpd[1453]: Listen normally on 9 cali22f52e0007a [fe80::ecee:eeff:feee:eeee%3]:123 Feb 13 19:52:25.911898 ntpd[1453]: Listen normally on 10 calicfcd56df5d9 [fe80::ecee:eeff:feee:eeee%4]:123 Feb 13 19:52:25.911956 ntpd[1453]: Listen normally on 11 vxlan.calico [fe80::64d5:31ff:fe6e:ec99%5]:123 Feb 13 19:52:26.218315 containerd[1482]: time="2025-02-13T19:52:26.218115130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:26.220080 containerd[1482]: time="2025-02-13T19:52:26.219972000Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:52:26.222132 containerd[1482]: time="2025-02-13T19:52:26.222052772Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:26.226435 containerd[1482]: time="2025-02-13T19:52:26.226328616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:26.228169 containerd[1482]: time="2025-02-13T19:52:26.227992416Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.254336379s" Feb 13 19:52:26.228169 containerd[1482]: time="2025-02-13T19:52:26.228045127Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:52:26.230350 containerd[1482]: time="2025-02-13T19:52:26.229845333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:52:26.231860 containerd[1482]: time="2025-02-13T19:52:26.231812556Z" level=info msg="CreateContainer within sandbox \"61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:52:26.250145 containerd[1482]: time="2025-02-13T19:52:26.250085088Z" level=info msg="CreateContainer within sandbox \"61d90b7ee05eb7a7d9ab70c991bf9839904daa1e2f7ac1e546558114f7b51041\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"c279240b4db545aebcce12784f9cbb3e58d878599c1ee9b02eea3e992b4d26c7\"" Feb 13 19:52:26.250912 containerd[1482]: time="2025-02-13T19:52:26.250863787Z" level=info msg="StartContainer for \"c279240b4db545aebcce12784f9cbb3e58d878599c1ee9b02eea3e992b4d26c7\"" Feb 13 19:52:26.295572 systemd[1]: Started cri-containerd-c279240b4db545aebcce12784f9cbb3e58d878599c1ee9b02eea3e992b4d26c7.scope - libcontainer container c279240b4db545aebcce12784f9cbb3e58d878599c1ee9b02eea3e992b4d26c7. Feb 13 19:52:26.332749 containerd[1482]: time="2025-02-13T19:52:26.332579103Z" level=info msg="StartContainer for \"c279240b4db545aebcce12784f9cbb3e58d878599c1ee9b02eea3e992b4d26c7\" returns successfully" Feb 13 19:52:26.784270 kubelet[1855]: E0213 19:52:26.784199 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:27.420105 containerd[1482]: time="2025-02-13T19:52:27.420028421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:27.421587 containerd[1482]: time="2025-02-13T19:52:27.421500602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:52:27.423313 containerd[1482]: time="2025-02-13T19:52:27.423208242Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:27.432320 containerd[1482]: time="2025-02-13T19:52:27.430462338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:27.432320 containerd[1482]: time="2025-02-13T19:52:27.431971271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.202049757s" Feb 13 19:52:27.432320 containerd[1482]: time="2025-02-13T19:52:27.432019010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:52:27.437556 containerd[1482]: time="2025-02-13T19:52:27.437507620Z" level=info msg="CreateContainer within sandbox \"2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:52:27.461130 containerd[1482]: time="2025-02-13T19:52:27.461070101Z" level=info msg="CreateContainer within sandbox \"2a68cd0a61219d175080a781d5c11742a62dc6ef2fb9ad7b15b28a474d14ffad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ee04264ee9615c50da02b15e190951a3715f948ab5bc4e7c51ba352e4f880156\"" Feb 13 19:52:27.461839 containerd[1482]: time="2025-02-13T19:52:27.461786641Z" level=info msg="StartContainer for \"ee04264ee9615c50da02b15e190951a3715f948ab5bc4e7c51ba352e4f880156\"" Feb 13 19:52:27.513561 systemd[1]: Started cri-containerd-ee04264ee9615c50da02b15e190951a3715f948ab5bc4e7c51ba352e4f880156.scope - 
libcontainer container ee04264ee9615c50da02b15e190951a3715f948ab5bc4e7c51ba352e4f880156. Feb 13 19:52:27.562885 containerd[1482]: time="2025-02-13T19:52:27.562717565Z" level=info msg="StartContainer for \"ee04264ee9615c50da02b15e190951a3715f948ab5bc4e7c51ba352e4f880156\" returns successfully" Feb 13 19:52:27.784974 kubelet[1855]: E0213 19:52:27.784900 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:27.903521 kubelet[1855]: I0213 19:52:27.903386 1855 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:52:27.903521 kubelet[1855]: I0213 19:52:27.903431 1855 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:52:28.246953 kubelet[1855]: I0213 19:52:28.246860 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-76gzr" podStartSLOduration=22.423393671 podStartE2EDuration="29.246841371s" podCreationTimestamp="2025-02-13 19:51:59 +0000 UTC" firstStartedPulling="2025-02-13 19:52:20.61188652 +0000 UTC m=+22.623249149" lastFinishedPulling="2025-02-13 19:52:27.435334225 +0000 UTC m=+29.446696849" observedRunningTime="2025-02-13 19:52:28.246665387 +0000 UTC m=+30.258028035" watchObservedRunningTime="2025-02-13 19:52:28.246841371 +0000 UTC m=+30.258204018" Feb 13 19:52:28.247242 kubelet[1855]: I0213 19:52:28.247074 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-ksk75" podStartSLOduration=10.687722256 podStartE2EDuration="16.247064375s" podCreationTimestamp="2025-02-13 19:52:12 +0000 UTC" firstStartedPulling="2025-02-13 19:52:20.670206115 +0000 UTC m=+22.681568747" lastFinishedPulling="2025-02-13 19:52:26.22954823 +0000 UTC m=+28.240910866" observedRunningTime="2025-02-13 19:52:27.231206692 +0000 UTC m=+29.242569338" watchObservedRunningTime="2025-02-13 19:52:28.247064375 +0000 UTC m=+30.258427023" Feb 13 19:52:28.785996 kubelet[1855]: E0213 19:52:28.785937 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:29.682486 update_engine[1465]: I20250213 19:52:29.682356 1465 update_attempter.cc:509] Updating boot flags... 
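The two pod_startup_latency_tracker entries above can be cross-checked from the timestamps they carry: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A small Python sketch verifying this for csi-node-driver-76gzr, with the values copied from the log and truncated to microseconds (the leftover nanosecond-level drift comes from the monotonic m=+ offsets kubelet actually uses):

    from datetime import datetime

    F = "%Y-%m-%d %H:%M:%S.%f"   # %f accepts at most 6 fractional digits, so the
                                 # nanosecond timestamps from the log are truncated here
    created   = datetime.strptime("2025-02-13 19:51:59.000000", F)  # podCreationTimestamp
    running   = datetime.strptime("2025-02-13 19:52:28.246841", F)  # watchObservedRunningTime
    pull_from = datetime.strptime("2025-02-13 19:52:20.611886", F)  # firstStartedPulling
    pull_to   = datetime.strptime("2025-02-13 19:52:27.435334", F)  # lastFinishedPulling

    e2e = (running - created).total_seconds()           # -> 29.246841  (logged: 29.246841371s)
    slo = e2e - (pull_to - pull_from).total_seconds()   # -> 22.423393  (logged: 22.423393671s)
    print(f"podStartE2EDuration ~ {e2e:.6f}s, podStartSLOduration ~ {slo:.6f}s")

The same arithmetic reproduces the nginx-deployment-85f456d6dd-ksk75 figures (16.247064375s E2E, 10.687722256s SLO) from its own firstStartedPulling/lastFinishedPulling pair.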
Feb 13 19:52:29.751338 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3478) Feb 13 19:52:29.788333 kubelet[1855]: E0213 19:52:29.787382 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:29.872346 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3482) Feb 13 19:52:29.988404 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (3482) Feb 13 19:52:30.787837 kubelet[1855]: E0213 19:52:30.787766 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:31.788988 kubelet[1855]: E0213 19:52:31.788922 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:32.790148 kubelet[1855]: E0213 19:52:32.790074 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:32.978692 kubelet[1855]: I0213 19:52:32.978638 1855 topology_manager.go:215] "Topology Admit Handler" podUID="5c878391-5bba-4bde-8c52-c5dbce9960c2" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 19:52:32.986848 systemd[1]: Created slice kubepods-besteffort-pod5c878391_5bba_4bde_8c52_c5dbce9960c2.slice - libcontainer container kubepods-besteffort-pod5c878391_5bba_4bde_8c52_c5dbce9960c2.slice. Feb 13 19:52:33.135433 kubelet[1855]: I0213 19:52:33.135252 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5c878391-5bba-4bde-8c52-c5dbce9960c2-data\") pod \"nfs-server-provisioner-0\" (UID: \"5c878391-5bba-4bde-8c52-c5dbce9960c2\") " pod="default/nfs-server-provisioner-0" Feb 13 19:52:33.136137 kubelet[1855]: I0213 19:52:33.135752 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86vn\" (UniqueName: \"kubernetes.io/projected/5c878391-5bba-4bde-8c52-c5dbce9960c2-kube-api-access-f86vn\") pod \"nfs-server-provisioner-0\" (UID: \"5c878391-5bba-4bde-8c52-c5dbce9960c2\") " pod="default/nfs-server-provisioner-0" Feb 13 19:52:33.291845 containerd[1482]: time="2025-02-13T19:52:33.291761168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5c878391-5bba-4bde-8c52-c5dbce9960c2,Namespace:default,Attempt:0,}" Feb 13 19:52:33.448335 systemd-networkd[1395]: cali60e51b789ff: Link UP Feb 13 19:52:33.451067 systemd-networkd[1395]: cali60e51b789ff: Gained carrier Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.359 [INFO][3506] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.69-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5c878391-5bba-4bde-8c52-c5dbce9960c2 1239 0 2025-02-13 19:52:32 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.128.0.69 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default 
ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.69-k8s-nfs--server--provisioner--0-" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.360 [INFO][3506] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.394 [INFO][3516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" HandleID="k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Workload="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.408 [INFO][3516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" HandleID="k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Workload="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.69", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:52:33.394477561 +0000 UTC"}, Hostname:"10.128.0.69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.408 [INFO][3516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.408 [INFO][3516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.408 [INFO][3516] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.69' Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.410 [INFO][3516] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.415 [INFO][3516] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.420 [INFO][3516] ipam/ipam.go 489: Trying affinity for 192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.422 [INFO][3516] ipam/ipam.go 155: Attempting to load block cidr=192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.425 [INFO][3516] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.425 [INFO][3516] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.427 [INFO][3516] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.432 [INFO][3516] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.440 [INFO][3516] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.113.195/26] block=192.168.113.192/26 handle="k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.440 [INFO][3516] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.195/26] handle="k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" host="10.128.0.69" Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.440 [INFO][3516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
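The Calico IPAM walkthrough above (acquire the host-wide lock, confirm the node's affinity for block 192.168.113.192/26, claim 192.168.113.195, release the lock) is easy to sanity-check with Python's ipaddress module. This is only an illustrative sketch of the block arithmetic, not Calico's own assignment code:

    import ipaddress

    block   = ipaddress.ip_network("192.168.113.192/26")   # block this node holds an affinity for
    claimed = ipaddress.ip_address("192.168.113.195")      # address assigned to nfs-server-provisioner-0

    assert claimed in block
    print(block.num_addresses)        # 64 addresses per /26 block
    print(list(block.hosts())[:4])    # first few usable addresses in the block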
Feb 13 19:52:33.470910 containerd[1482]: 2025-02-13 19:52:33.440 [INFO][3516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.195/26] IPv6=[] ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" HandleID="k8s-pod-network.38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Workload="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:52:33.472485 containerd[1482]: 2025-02-13 19:52:33.442 [INFO][3506] cni-plugin/k8s.go 386: Populated endpoint ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.69-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5c878391-5bba-4bde-8c52-c5dbce9960c2", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 52, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.69", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.113.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:52:33.472485 containerd[1482]: 2025-02-13 19:52:33.443 [INFO][3506] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.113.195/32] ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:52:33.472485 containerd[1482]: 2025-02-13 19:52:33.443 [INFO][3506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:52:33.472485 containerd[1482]: 2025-02-13 19:52:33.452 [INFO][3506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:52:33.472880 containerd[1482]: 2025-02-13 19:52:33.453 [INFO][3506] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.69-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5c878391-5bba-4bde-8c52-c5dbce9960c2", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 52, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.69", ContainerID:"38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.113.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"1a:53:17:ec:da:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:52:33.472880 containerd[1482]: 2025-02-13 19:52:33.469 [INFO][3506] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.69-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:52:33.509566 containerd[1482]: time="2025-02-13T19:52:33.508565919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:33.509566 containerd[1482]: time="2025-02-13T19:52:33.509352156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:33.509566 containerd[1482]: time="2025-02-13T19:52:33.509382576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:33.509959 containerd[1482]: time="2025-02-13T19:52:33.509580924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:33.543601 systemd[1]: Started cri-containerd-38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb.scope - libcontainer container 38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb. 
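The WorkloadEndpoint dump above prints each port in hex (Port:0x801, 0x8023, 0x4e50, ...), while the profile list a few entries earlier gives the same nfs-server-provisioner ports by name and decimal (nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662). A quick Python check that the two renderings agree:

    hex_ports = {"nfs": 0x801, "nlockmgr": 0x8023, "mountd": 0x4e50,
                 "rquotad": 0x36b, "rpcbind": 0x6f, "statd": 0x296}

    expected  = {"nfs": 2049, "nlockmgr": 32803, "mountd": 20048,
                 "rquotad": 875, "rpcbind": 111, "statd": 662}

    # Hex literals are plain ints, so equality confirms the dump matches the named ports.
    assert hex_ports == expected
    print("hex port values match the named nfs-server-provisioner ports")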
Feb 13 19:52:33.600643 containerd[1482]: time="2025-02-13T19:52:33.600591598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5c878391-5bba-4bde-8c52-c5dbce9960c2,Namespace:default,Attempt:0,} returns sandbox id \"38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb\"" Feb 13 19:52:33.602930 containerd[1482]: time="2025-02-13T19:52:33.602790336Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:52:33.790957 kubelet[1855]: E0213 19:52:33.790885 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:34.696055 systemd-networkd[1395]: cali60e51b789ff: Gained IPv6LL Feb 13 19:52:34.791721 kubelet[1855]: E0213 19:52:34.791222 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:35.791976 kubelet[1855]: E0213 19:52:35.791923 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:36.029950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1405657369.mount: Deactivated successfully. Feb 13 19:52:36.793516 kubelet[1855]: E0213 19:52:36.793466 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:36.911568 ntpd[1453]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:52:36.913222 ntpd[1453]: 13 Feb 19:52:36 ntpd[1453]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Feb 13 19:52:37.795308 kubelet[1855]: E0213 19:52:37.794665 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:38.760466 kubelet[1855]: E0213 19:52:38.760387 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:38.795440 kubelet[1855]: E0213 19:52:38.795373 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:39.436439 containerd[1482]: time="2025-02-13T19:52:39.436367353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:39.438089 containerd[1482]: time="2025-02-13T19:52:39.438013412Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91045236" Feb 13 19:52:39.439458 containerd[1482]: time="2025-02-13T19:52:39.439360474Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:39.443186 containerd[1482]: time="2025-02-13T19:52:39.443056910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:39.444755 containerd[1482]: time="2025-02-13T19:52:39.444493924Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest 
\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.841652566s" Feb 13 19:52:39.444755 containerd[1482]: time="2025-02-13T19:52:39.444543243Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 19:52:39.448198 containerd[1482]: time="2025-02-13T19:52:39.448138427Z" level=info msg="CreateContainer within sandbox \"38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:52:39.469117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032419262.mount: Deactivated successfully. Feb 13 19:52:39.471353 containerd[1482]: time="2025-02-13T19:52:39.471264414Z" level=info msg="CreateContainer within sandbox \"38d2e8eb1a14ccefd51c3412b26e8a6d5acdaee9f3c83ea4fd34114ae16c05eb\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"282d8491d707f677877d5963dbe07557e2c373b0511798fd774005b480910a51\"" Feb 13 19:52:39.472371 containerd[1482]: time="2025-02-13T19:52:39.472328378Z" level=info msg="StartContainer for \"282d8491d707f677877d5963dbe07557e2c373b0511798fd774005b480910a51\"" Feb 13 19:52:39.520557 systemd[1]: Started cri-containerd-282d8491d707f677877d5963dbe07557e2c373b0511798fd774005b480910a51.scope - libcontainer container 282d8491d707f677877d5963dbe07557e2c373b0511798fd774005b480910a51. Feb 13 19:52:39.556335 containerd[1482]: time="2025-02-13T19:52:39.556230437Z" level=info msg="StartContainer for \"282d8491d707f677877d5963dbe07557e2c373b0511798fd774005b480910a51\" returns successfully" Feb 13 19:52:39.796461 kubelet[1855]: E0213 19:52:39.796405 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:40.797076 kubelet[1855]: E0213 19:52:40.797001 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:41.797499 kubelet[1855]: E0213 19:52:41.797426 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:42.798392 kubelet[1855]: E0213 19:52:42.798322 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:43.798922 kubelet[1855]: E0213 19:52:43.798858 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:44.799723 kubelet[1855]: E0213 19:52:44.799668 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:45.800497 kubelet[1855]: E0213 19:52:45.800419 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:46.499785 kubelet[1855]: I0213 19:52:46.499708 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=8.655991636 podStartE2EDuration="14.49968577s" podCreationTimestamp="2025-02-13 19:52:32 +0000 UTC" firstStartedPulling="2025-02-13 19:52:33.602270571 +0000 UTC m=+35.613633212" lastFinishedPulling="2025-02-13 19:52:39.445964707 +0000 UTC m=+41.457327346" observedRunningTime="2025-02-13 19:52:40.370811918 +0000 UTC 
m=+42.382174564" watchObservedRunningTime="2025-02-13 19:52:46.49968577 +0000 UTC m=+48.511048416" Feb 13 19:52:46.801249 kubelet[1855]: E0213 19:52:46.801068 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:47.802306 kubelet[1855]: E0213 19:52:47.802231 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:48.803189 kubelet[1855]: E0213 19:52:48.803112 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:48.878895 kubelet[1855]: I0213 19:52:48.878828 1855 topology_manager.go:215] "Topology Admit Handler" podUID="60893af0-f2fa-425d-8ba0-58af38f16bbf" podNamespace="default" podName="test-pod-1" Feb 13 19:52:48.887000 systemd[1]: Created slice kubepods-besteffort-pod60893af0_f2fa_425d_8ba0_58af38f16bbf.slice - libcontainer container kubepods-besteffort-pod60893af0_f2fa_425d_8ba0_58af38f16bbf.slice. Feb 13 19:52:49.036964 kubelet[1855]: I0213 19:52:49.036889 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-56217c3e-d018-4024-898a-064c83ab1899\" (UniqueName: \"kubernetes.io/nfs/60893af0-f2fa-425d-8ba0-58af38f16bbf-pvc-56217c3e-d018-4024-898a-064c83ab1899\") pod \"test-pod-1\" (UID: \"60893af0-f2fa-425d-8ba0-58af38f16bbf\") " pod="default/test-pod-1" Feb 13 19:52:49.036964 kubelet[1855]: I0213 19:52:49.036954 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdmt\" (UniqueName: \"kubernetes.io/projected/60893af0-f2fa-425d-8ba0-58af38f16bbf-kube-api-access-zzdmt\") pod \"test-pod-1\" (UID: \"60893af0-f2fa-425d-8ba0-58af38f16bbf\") " pod="default/test-pod-1" Feb 13 19:52:49.178319 kernel: FS-Cache: Loaded Feb 13 19:52:49.270432 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:52:49.270599 kernel: RPC: Registered udp transport module. Feb 13 19:52:49.270640 kernel: RPC: Registered tcp transport module. Feb 13 19:52:49.275162 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:52:49.280855 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 19:52:49.646460 kernel: NFS: Registering the id_resolver key type Feb 13 19:52:49.646645 kernel: Key type id_resolver registered Feb 13 19:52:49.646687 kernel: Key type id_legacy registered Feb 13 19:52:49.694890 nfsidmap[3728]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Feb 13 19:52:49.705346 nfsidmap[3729]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal' Feb 13 19:52:49.791731 containerd[1482]: time="2025-02-13T19:52:49.791677185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:60893af0-f2fa-425d-8ba0-58af38f16bbf,Namespace:default,Attempt:0,}" Feb 13 19:52:49.806365 kubelet[1855]: E0213 19:52:49.804598 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:49.950711 systemd-networkd[1395]: cali5ec59c6bf6e: Link UP Feb 13 19:52:49.951795 systemd-networkd[1395]: cali5ec59c6bf6e: Gained carrier Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.855 [INFO][3731] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.69-k8s-test--pod--1-eth0 default 60893af0-f2fa-425d-8ba0-58af38f16bbf 1301 0 2025-02-13 19:52:33 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.128.0.69 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.69-k8s-test--pod--1-" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.856 [INFO][3731] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.69-k8s-test--pod--1-eth0" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.893 [INFO][3741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" HandleID="k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Workload="10.128.0.69-k8s-test--pod--1-eth0" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.907 [INFO][3741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" HandleID="k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Workload="10.128.0.69-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edd90), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.69", "pod":"test-pod-1", "timestamp":"2025-02-13 19:52:49.893851775 +0000 UTC"}, Hostname:"10.128.0.69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.907 [INFO][3741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.907 [INFO][3741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.907 [INFO][3741] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.69' Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.909 [INFO][3741] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.914 [INFO][3741] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.920 [INFO][3741] ipam/ipam.go 489: Trying affinity for 192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.923 [INFO][3741] ipam/ipam.go 155: Attempting to load block cidr=192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.927 [INFO][3741] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.927 [INFO][3741] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.929 [INFO][3741] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823 Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.937 [INFO][3741] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.945 [INFO][3741] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.113.196/26] block=192.168.113.192/26 handle="k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.945 [INFO][3741] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.196/26] handle="k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" host="10.128.0.69" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.945 [INFO][3741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.945 [INFO][3741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.196/26] IPv6=[] ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" HandleID="k8s-pod-network.6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Workload="10.128.0.69-k8s-test--pod--1-eth0" Feb 13 19:52:49.966605 containerd[1482]: 2025-02-13 19:52:49.947 [INFO][3731] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.69-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.69-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"60893af0-f2fa-425d-8ba0-58af38f16bbf", ResourceVersion:"1301", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 52, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.69", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:52:49.971155 containerd[1482]: 2025-02-13 19:52:49.947 [INFO][3731] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.113.196/32] ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.69-k8s-test--pod--1-eth0" Feb 13 19:52:49.971155 containerd[1482]: 2025-02-13 19:52:49.947 [INFO][3731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.69-k8s-test--pod--1-eth0" Feb 13 19:52:49.971155 containerd[1482]: 2025-02-13 19:52:49.951 [INFO][3731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.69-k8s-test--pod--1-eth0" Feb 13 19:52:49.971155 containerd[1482]: 2025-02-13 19:52:49.952 [INFO][3731] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.69-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.69-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"60893af0-f2fa-425d-8ba0-58af38f16bbf", ResourceVersion:"1301", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 52, 33, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.69", ContainerID:"6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"56:3f:53:1b:a1:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:52:49.971155 containerd[1482]: 2025-02-13 19:52:49.962 [INFO][3731] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.69-k8s-test--pod--1-eth0" Feb 13 19:52:50.006433 containerd[1482]: time="2025-02-13T19:52:50.006150147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:52:50.006433 containerd[1482]: time="2025-02-13T19:52:50.006220034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:52:50.006433 containerd[1482]: time="2025-02-13T19:52:50.006233367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:50.006433 containerd[1482]: time="2025-02-13T19:52:50.006372558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:52:50.032638 systemd[1]: Started cri-containerd-6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823.scope - libcontainer container 6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823. 
Feb 13 19:52:50.087971 containerd[1482]: time="2025-02-13T19:52:50.087886995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:60893af0-f2fa-425d-8ba0-58af38f16bbf,Namespace:default,Attempt:0,} returns sandbox id \"6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823\"" Feb 13 19:52:50.090803 containerd[1482]: time="2025-02-13T19:52:50.090754448Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:52:50.351981 containerd[1482]: time="2025-02-13T19:52:50.351915369Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:50.353159 containerd[1482]: time="2025-02-13T19:52:50.353087202Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:52:50.358300 containerd[1482]: time="2025-02-13T19:52:50.358205638Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 267.409512ms" Feb 13 19:52:50.358300 containerd[1482]: time="2025-02-13T19:52:50.358264715Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:52:50.361224 containerd[1482]: time="2025-02-13T19:52:50.361180690Z" level=info msg="CreateContainer within sandbox \"6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:52:50.384111 containerd[1482]: time="2025-02-13T19:52:50.383990079Z" level=info msg="CreateContainer within sandbox \"6cb8edd6ffc4cb235af58ba074584e3c76409be6e986c45eb32a6ef43ab96823\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"69a4b980e1c8153281bb9a9f963ab015b89a82950b9820b7db12447473a6d9ee\"" Feb 13 19:52:50.385131 containerd[1482]: time="2025-02-13T19:52:50.384836489Z" level=info msg="StartContainer for \"69a4b980e1c8153281bb9a9f963ab015b89a82950b9820b7db12447473a6d9ee\"" Feb 13 19:52:50.438544 systemd[1]: Started cri-containerd-69a4b980e1c8153281bb9a9f963ab015b89a82950b9820b7db12447473a6d9ee.scope - libcontainer container 69a4b980e1c8153281bb9a9f963ab015b89a82950b9820b7db12447473a6d9ee. 
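The two "PullImage ghcr.io/flatcar/nginx:latest" round trips are worth comparing: the first pull (at 19:52:26) emitted an ImageCreate event after reading 73,054,493 bytes in 4.254336379s, while the second (here, at 19:52:50) emitted only an ImageUpdate event, read 61 bytes, and returned the same sha256:fe94eb5f... image id in 267.409512ms, which indicates the layers were already present in containerd's local content store. Rough throughput arithmetic for the cold pull, as a quick sketch:

    bytes_read = 73_054_493          # "bytes read" reported for the cold nginx pull
    seconds    = 4.254336379         # "in 4.254336379s" from the Pulled-image message
    print(f"~{bytes_read / seconds / 1e6:.1f} MB/s effective pull throughput")   # ~17.2 MB/s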
Feb 13 19:52:50.476484 containerd[1482]: time="2025-02-13T19:52:50.476419076Z" level=info msg="StartContainer for \"69a4b980e1c8153281bb9a9f963ab015b89a82950b9820b7db12447473a6d9ee\" returns successfully" Feb 13 19:52:50.804887 kubelet[1855]: E0213 19:52:50.804819 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:51.405176 kubelet[1855]: I0213 19:52:51.405084 1855 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.135879341 podStartE2EDuration="18.40506056s" podCreationTimestamp="2025-02-13 19:52:33 +0000 UTC" firstStartedPulling="2025-02-13 19:52:50.090013773 +0000 UTC m=+52.101376396" lastFinishedPulling="2025-02-13 19:52:50.359194981 +0000 UTC m=+52.370557615" observedRunningTime="2025-02-13 19:52:51.404620751 +0000 UTC m=+53.415983397" watchObservedRunningTime="2025-02-13 19:52:51.40506056 +0000 UTC m=+53.416423206" Feb 13 19:52:51.590805 systemd-networkd[1395]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:52:51.805804 kubelet[1855]: E0213 19:52:51.805730 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:52.806243 kubelet[1855]: E0213 19:52:52.806161 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:53.807411 kubelet[1855]: E0213 19:52:53.807343 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:53.911742 ntpd[1453]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 19:52:53.912386 ntpd[1453]: 13 Feb 19:52:53 ntpd[1453]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123 Feb 13 19:52:54.808380 kubelet[1855]: E0213 19:52:54.808313 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:55.809440 kubelet[1855]: E0213 19:52:55.809358 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:56.810102 kubelet[1855]: E0213 19:52:56.810028 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:57.811339 kubelet[1855]: E0213 19:52:57.811257 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:58.760247 kubelet[1855]: E0213 19:52:58.760177 1855 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:58.802165 containerd[1482]: time="2025-02-13T19:52:58.802081475Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:58.802782 containerd[1482]: time="2025-02-13T19:52:58.802249952Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:58.802782 containerd[1482]: time="2025-02-13T19:52:58.802338106Z" level=info msg="StopPodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:58.803829 containerd[1482]: time="2025-02-13T19:52:58.803121135Z" level=info msg="RemovePodSandbox for \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:58.803829 
containerd[1482]: time="2025-02-13T19:52:58.803163021Z" level=info msg="Forcibly stopping sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\"" Feb 13 19:52:58.803829 containerd[1482]: time="2025-02-13T19:52:58.803271544Z" level=info msg="TearDown network for sandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" successfully" Feb 13 19:52:58.808107 containerd[1482]: time="2025-02-13T19:52:58.808010194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:52:58.808107 containerd[1482]: time="2025-02-13T19:52:58.808099453Z" level=info msg="RemovePodSandbox \"b4f097af599f6a4feca654374fcaac12415244b8cf3fc532734a12fa2b9238c8\" returns successfully" Feb 13 19:52:58.808885 containerd[1482]: time="2025-02-13T19:52:58.808712396Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:58.809188 containerd[1482]: time="2025-02-13T19:52:58.808897990Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:58.809188 containerd[1482]: time="2025-02-13T19:52:58.808918909Z" level=info msg="StopPodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully" Feb 13 19:52:58.809504 containerd[1482]: time="2025-02-13T19:52:58.809429965Z" level=info msg="RemovePodSandbox for \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:58.809591 containerd[1482]: time="2025-02-13T19:52:58.809525078Z" level=info msg="Forcibly stopping sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\"" Feb 13 19:52:58.809766 containerd[1482]: time="2025-02-13T19:52:58.809671840Z" level=info msg="TearDown network for sandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" successfully" Feb 13 19:52:58.811779 kubelet[1855]: E0213 19:52:58.811732 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:58.814197 containerd[1482]: time="2025-02-13T19:52:58.814137114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:52:58.814370 containerd[1482]: time="2025-02-13T19:52:58.814214974Z" level=info msg="RemovePodSandbox \"92559107c5eab20674210697ec211c3e63bc81c442dcffef7ec2d59a321415be\" returns successfully"
Feb 13 19:52:58.814946 containerd[1482]: time="2025-02-13T19:52:58.814758594Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\""
Feb 13 19:52:58.814946 containerd[1482]: time="2025-02-13T19:52:58.814902301Z" level=info msg="TearDown network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" successfully"
Feb 13 19:52:58.814946 containerd[1482]: time="2025-02-13T19:52:58.814923192Z" level=info msg="StopPodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" returns successfully"
Feb 13 19:52:58.815562 containerd[1482]: time="2025-02-13T19:52:58.815521759Z" level=info msg="RemovePodSandbox for \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\""
Feb 13 19:52:58.815693 containerd[1482]: time="2025-02-13T19:52:58.815647144Z" level=info msg="Forcibly stopping sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\""
Feb 13 19:52:58.815864 containerd[1482]: time="2025-02-13T19:52:58.815788662Z" level=info msg="TearDown network for sandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" successfully"
Feb 13 19:52:58.819521 containerd[1482]: time="2025-02-13T19:52:58.819455329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.819521 containerd[1482]: time="2025-02-13T19:52:58.819528888Z" level=info msg="RemovePodSandbox \"2c2fb23a67337a7f610e3b01538cd13e62e35172a4f5618243a1ef8896799506\" returns successfully"
Feb 13 19:52:58.820016 containerd[1482]: time="2025-02-13T19:52:58.819986429Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\""
Feb 13 19:52:58.820176 containerd[1482]: time="2025-02-13T19:52:58.820112758Z" level=info msg="TearDown network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" successfully"
Feb 13 19:52:58.820176 containerd[1482]: time="2025-02-13T19:52:58.820135663Z" level=info msg="StopPodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" returns successfully"
Feb 13 19:52:58.820794 containerd[1482]: time="2025-02-13T19:52:58.820621743Z" level=info msg="RemovePodSandbox for \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\""
Feb 13 19:52:58.820794 containerd[1482]: time="2025-02-13T19:52:58.820655541Z" level=info msg="Forcibly stopping sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\""
Feb 13 19:52:58.821005 containerd[1482]: time="2025-02-13T19:52:58.820784892Z" level=info msg="TearDown network for sandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" successfully"
Feb 13 19:52:58.824857 containerd[1482]: time="2025-02-13T19:52:58.824670363Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.824857 containerd[1482]: time="2025-02-13T19:52:58.824746237Z" level=info msg="RemovePodSandbox \"3a788f5389f046f318b4355c8e0377295df2eaa84083d0675ff9aa7e63e3e885\" returns successfully"
Feb 13 19:52:58.825643 containerd[1482]: time="2025-02-13T19:52:58.825390651Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\""
Feb 13 19:52:58.825643 containerd[1482]: time="2025-02-13T19:52:58.825544796Z" level=info msg="TearDown network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" successfully"
Feb 13 19:52:58.825643 containerd[1482]: time="2025-02-13T19:52:58.825564312Z" level=info msg="StopPodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" returns successfully"
Feb 13 19:52:58.827070 containerd[1482]: time="2025-02-13T19:52:58.826367555Z" level=info msg="RemovePodSandbox for \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\""
Feb 13 19:52:58.827070 containerd[1482]: time="2025-02-13T19:52:58.826403309Z" level=info msg="Forcibly stopping sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\""
Feb 13 19:52:58.827070 containerd[1482]: time="2025-02-13T19:52:58.826516789Z" level=info msg="TearDown network for sandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" successfully"
Feb 13 19:52:58.830568 containerd[1482]: time="2025-02-13T19:52:58.830490642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.830699 containerd[1482]: time="2025-02-13T19:52:58.830614716Z" level=info msg="RemovePodSandbox \"f352f984cb24556c39280a039882f2e3803cd6b890bcc223300bf71e456fb5b1\" returns successfully"
Feb 13 19:52:58.831087 containerd[1482]: time="2025-02-13T19:52:58.831042322Z" level=info msg="StopPodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\""
Feb 13 19:52:58.831192 containerd[1482]: time="2025-02-13T19:52:58.831169902Z" level=info msg="TearDown network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" successfully"
Feb 13 19:52:58.831259 containerd[1482]: time="2025-02-13T19:52:58.831204118Z" level=info msg="StopPodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" returns successfully"
Feb 13 19:52:58.831727 containerd[1482]: time="2025-02-13T19:52:58.831674875Z" level=info msg="RemovePodSandbox for \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\""
Feb 13 19:52:58.831940 containerd[1482]: time="2025-02-13T19:52:58.831909459Z" level=info msg="Forcibly stopping sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\""
Feb 13 19:52:58.832080 containerd[1482]: time="2025-02-13T19:52:58.832021747Z" level=info msg="TearDown network for sandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" successfully"
Feb 13 19:52:58.835739 containerd[1482]: time="2025-02-13T19:52:58.835678826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.835996 containerd[1482]: time="2025-02-13T19:52:58.835765227Z" level=info msg="RemovePodSandbox \"5864ad5a9b6ce4e78abe38ce39f77ba5cc9b0b93d77e9bab14511d4c261592d3\" returns successfully"
Feb 13 19:52:58.836311 containerd[1482]: time="2025-02-13T19:52:58.836204870Z" level=info msg="StopPodSandbox for \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\""
Feb 13 19:52:58.836427 containerd[1482]: time="2025-02-13T19:52:58.836357374Z" level=info msg="TearDown network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\" successfully"
Feb 13 19:52:58.836427 containerd[1482]: time="2025-02-13T19:52:58.836378310Z" level=info msg="StopPodSandbox for \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\" returns successfully"
Feb 13 19:52:58.837011 containerd[1482]: time="2025-02-13T19:52:58.836911910Z" level=info msg="RemovePodSandbox for \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\""
Feb 13 19:52:58.837011 containerd[1482]: time="2025-02-13T19:52:58.836949120Z" level=info msg="Forcibly stopping sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\""
Feb 13 19:52:58.837366 containerd[1482]: time="2025-02-13T19:52:58.837058153Z" level=info msg="TearDown network for sandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\" successfully"
Feb 13 19:52:58.840835 containerd[1482]: time="2025-02-13T19:52:58.840759840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.840835 containerd[1482]: time="2025-02-13T19:52:58.840833544Z" level=info msg="RemovePodSandbox \"79634b8e023af3c6566243de5207df12cc76c617fe6a6225e5fc74e660ba3a58\" returns successfully"
Feb 13 19:52:58.841452 containerd[1482]: time="2025-02-13T19:52:58.841353515Z" level=info msg="StopPodSandbox for \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\""
Feb 13 19:52:58.841572 containerd[1482]: time="2025-02-13T19:52:58.841483564Z" level=info msg="TearDown network for sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\" successfully"
Feb 13 19:52:58.841572 containerd[1482]: time="2025-02-13T19:52:58.841515756Z" level=info msg="StopPodSandbox for \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\" returns successfully"
Feb 13 19:52:58.841993 containerd[1482]: time="2025-02-13T19:52:58.841966401Z" level=info msg="RemovePodSandbox for \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\""
Feb 13 19:52:58.842123 containerd[1482]: time="2025-02-13T19:52:58.842093218Z" level=info msg="Forcibly stopping sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\""
Feb 13 19:52:58.842296 containerd[1482]: time="2025-02-13T19:52:58.842215471Z" level=info msg="TearDown network for sandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\" successfully"
Feb 13 19:52:58.846087 containerd[1482]: time="2025-02-13T19:52:58.846040548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.846087 containerd[1482]: time="2025-02-13T19:52:58.846109263Z" level=info msg="RemovePodSandbox \"e55d0270b32119f004c01f8573c57598a70aa319c925728638bad89be653d255\" returns successfully"
Feb 13 19:52:58.846735 containerd[1482]: time="2025-02-13T19:52:58.846703878Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\""
Feb 13 19:52:58.846959 containerd[1482]: time="2025-02-13T19:52:58.846910489Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully"
Feb 13 19:52:58.846959 containerd[1482]: time="2025-02-13T19:52:58.846938273Z" level=info msg="StopPodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully"
Feb 13 19:52:58.847511 containerd[1482]: time="2025-02-13T19:52:58.847461996Z" level=info msg="RemovePodSandbox for \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\""
Feb 13 19:52:58.847511 containerd[1482]: time="2025-02-13T19:52:58.847508172Z" level=info msg="Forcibly stopping sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\""
Feb 13 19:52:58.847664 containerd[1482]: time="2025-02-13T19:52:58.847612444Z" level=info msg="TearDown network for sandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" successfully"
Feb 13 19:52:58.851567 containerd[1482]: time="2025-02-13T19:52:58.851455939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.851567 containerd[1482]: time="2025-02-13T19:52:58.851537296Z" level=info msg="RemovePodSandbox \"38f16b74c33f98d1dd017bf10e4108abcab47445acb62e5777f46ce98430509e\" returns successfully"
Feb 13 19:52:58.852045 containerd[1482]: time="2025-02-13T19:52:58.852005494Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\""
Feb 13 19:52:58.852331 containerd[1482]: time="2025-02-13T19:52:58.852300084Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully"
Feb 13 19:52:58.852477 containerd[1482]: time="2025-02-13T19:52:58.852331194Z" level=info msg="StopPodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully"
Feb 13 19:52:58.853022 containerd[1482]: time="2025-02-13T19:52:58.852834727Z" level=info msg="RemovePodSandbox for \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\""
Feb 13 19:52:58.853022 containerd[1482]: time="2025-02-13T19:52:58.852869995Z" level=info msg="Forcibly stopping sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\""
Feb 13 19:52:58.853022 containerd[1482]: time="2025-02-13T19:52:58.852979219Z" level=info msg="TearDown network for sandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" successfully"
Feb 13 19:52:58.857011 containerd[1482]: time="2025-02-13T19:52:58.856919889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.857011 containerd[1482]: time="2025-02-13T19:52:58.856992782Z" level=info msg="RemovePodSandbox \"ef85aaa6b67201044737beed6fc9f92ced316e8489b11afa564102dccf0f8872\" returns successfully"
Feb 13 19:52:58.857721 containerd[1482]: time="2025-02-13T19:52:58.857688506Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\""
Feb 13 19:52:58.857982 containerd[1482]: time="2025-02-13T19:52:58.857941567Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully"
Feb 13 19:52:58.857982 containerd[1482]: time="2025-02-13T19:52:58.857970344Z" level=info msg="StopPodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully"
Feb 13 19:52:58.858624 containerd[1482]: time="2025-02-13T19:52:58.858509982Z" level=info msg="RemovePodSandbox for \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\""
Feb 13 19:52:58.858624 containerd[1482]: time="2025-02-13T19:52:58.858559656Z" level=info msg="Forcibly stopping sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\""
Feb 13 19:52:58.858864 containerd[1482]: time="2025-02-13T19:52:58.858662850Z" level=info msg="TearDown network for sandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" successfully"
Feb 13 19:52:58.862422 containerd[1482]: time="2025-02-13T19:52:58.862361444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.862632 containerd[1482]: time="2025-02-13T19:52:58.862436415Z" level=info msg="RemovePodSandbox \"a7bde046ef51923f8fcdcb1d47399f5c9eb68e6b1fb37b946bc2e3fa98341aa5\" returns successfully"
Feb 13 19:52:58.863071 containerd[1482]: time="2025-02-13T19:52:58.862873320Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\""
Feb 13 19:52:58.863071 containerd[1482]: time="2025-02-13T19:52:58.863011423Z" level=info msg="TearDown network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" successfully"
Feb 13 19:52:58.863071 containerd[1482]: time="2025-02-13T19:52:58.863032759Z" level=info msg="StopPodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" returns successfully"
Feb 13 19:52:58.863480 containerd[1482]: time="2025-02-13T19:52:58.863409318Z" level=info msg="RemovePodSandbox for \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\""
Feb 13 19:52:58.863480 containerd[1482]: time="2025-02-13T19:52:58.863440482Z" level=info msg="Forcibly stopping sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\""
Feb 13 19:52:58.863601 containerd[1482]: time="2025-02-13T19:52:58.863549593Z" level=info msg="TearDown network for sandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" successfully"
Feb 13 19:52:58.867148 containerd[1482]: time="2025-02-13T19:52:58.867103345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.867336 containerd[1482]: time="2025-02-13T19:52:58.867180493Z" level=info msg="RemovePodSandbox \"4c64aa6015949c368a8c323ac40bdd2fc74d833b4085c49acf4ab912026f53b6\" returns successfully"
Feb 13 19:52:58.867763 containerd[1482]: time="2025-02-13T19:52:58.867620904Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\""
Feb 13 19:52:58.867870 containerd[1482]: time="2025-02-13T19:52:58.867761469Z" level=info msg="TearDown network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" successfully"
Feb 13 19:52:58.867870 containerd[1482]: time="2025-02-13T19:52:58.867780593Z" level=info msg="StopPodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" returns successfully"
Feb 13 19:52:58.868325 containerd[1482]: time="2025-02-13T19:52:58.868165868Z" level=info msg="RemovePodSandbox for \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\""
Feb 13 19:52:58.868325 containerd[1482]: time="2025-02-13T19:52:58.868199918Z" level=info msg="Forcibly stopping sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\""
Feb 13 19:52:58.868510 containerd[1482]: time="2025-02-13T19:52:58.868321662Z" level=info msg="TearDown network for sandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" successfully"
Feb 13 19:52:58.871906 containerd[1482]: time="2025-02-13T19:52:58.871863058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.872012 containerd[1482]: time="2025-02-13T19:52:58.871962597Z" level=info msg="RemovePodSandbox \"dae031da4df21a7891cbdc9373b48e84c9d9152e8e707c43416f05fc6535f54d\" returns successfully"
Feb 13 19:52:58.872508 containerd[1482]: time="2025-02-13T19:52:58.872470384Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\""
Feb 13 19:52:58.872804 containerd[1482]: time="2025-02-13T19:52:58.872764718Z" level=info msg="TearDown network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" successfully"
Feb 13 19:52:58.872804 containerd[1482]: time="2025-02-13T19:52:58.872790160Z" level=info msg="StopPodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" returns successfully"
Feb 13 19:52:58.873193 containerd[1482]: time="2025-02-13T19:52:58.873161489Z" level=info msg="RemovePodSandbox for \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\""
Feb 13 19:52:58.873312 containerd[1482]: time="2025-02-13T19:52:58.873198571Z" level=info msg="Forcibly stopping sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\""
Feb 13 19:52:58.873385 containerd[1482]: time="2025-02-13T19:52:58.873322984Z" level=info msg="TearDown network for sandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" successfully"
Feb 13 19:52:58.877014 containerd[1482]: time="2025-02-13T19:52:58.876955228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.877259 containerd[1482]: time="2025-02-13T19:52:58.877028331Z" level=info msg="RemovePodSandbox \"ac987deff4a8078ee3b8291ea86505578687f55ddddc252a326ab2edfb313623\" returns successfully"
Feb 13 19:52:58.877631 containerd[1482]: time="2025-02-13T19:52:58.877582559Z" level=info msg="StopPodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\""
Feb 13 19:52:58.877802 containerd[1482]: time="2025-02-13T19:52:58.877725014Z" level=info msg="TearDown network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" successfully"
Feb 13 19:52:58.877802 containerd[1482]: time="2025-02-13T19:52:58.877744062Z" level=info msg="StopPodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" returns successfully"
Feb 13 19:52:58.878184 containerd[1482]: time="2025-02-13T19:52:58.878155065Z" level=info msg="RemovePodSandbox for \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\""
Feb 13 19:52:58.878300 containerd[1482]: time="2025-02-13T19:52:58.878191260Z" level=info msg="Forcibly stopping sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\""
Feb 13 19:52:58.878380 containerd[1482]: time="2025-02-13T19:52:58.878316242Z" level=info msg="TearDown network for sandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" successfully"
Feb 13 19:52:58.881825 containerd[1482]: time="2025-02-13T19:52:58.881763633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.881957 containerd[1482]: time="2025-02-13T19:52:58.881840035Z" level=info msg="RemovePodSandbox \"a7891b3e8faadca0886967b2ceee9e06f58a48e8accb7d00c0274c1d0cd0b2d2\" returns successfully"
Feb 13 19:52:58.882355 containerd[1482]: time="2025-02-13T19:52:58.882296278Z" level=info msg="StopPodSandbox for \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\""
Feb 13 19:52:58.882456 containerd[1482]: time="2025-02-13T19:52:58.882426013Z" level=info msg="TearDown network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\" successfully"
Feb 13 19:52:58.882456 containerd[1482]: time="2025-02-13T19:52:58.882445296Z" level=info msg="StopPodSandbox for \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\" returns successfully"
Feb 13 19:52:58.883026 containerd[1482]: time="2025-02-13T19:52:58.882849181Z" level=info msg="RemovePodSandbox for \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\""
Feb 13 19:52:58.883026 containerd[1482]: time="2025-02-13T19:52:58.882885087Z" level=info msg="Forcibly stopping sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\""
Feb 13 19:52:58.883162 containerd[1482]: time="2025-02-13T19:52:58.882996510Z" level=info msg="TearDown network for sandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\" successfully"
Feb 13 19:52:58.886568 containerd[1482]: time="2025-02-13T19:52:58.886509544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.886688 containerd[1482]: time="2025-02-13T19:52:58.886578888Z" level=info msg="RemovePodSandbox \"6f2312a9e2b6b838be9a30d83e2da64dcfc8d74df1bed7834089bb2d9d46d54d\" returns successfully"
Feb 13 19:52:58.887112 containerd[1482]: time="2025-02-13T19:52:58.887077437Z" level=info msg="StopPodSandbox for \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\""
Feb 13 19:52:58.887315 containerd[1482]: time="2025-02-13T19:52:58.887211097Z" level=info msg="TearDown network for sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\" successfully"
Feb 13 19:52:58.887315 containerd[1482]: time="2025-02-13T19:52:58.887236322Z" level=info msg="StopPodSandbox for \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\" returns successfully"
Feb 13 19:52:58.887867 containerd[1482]: time="2025-02-13T19:52:58.887680932Z" level=info msg="RemovePodSandbox for \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\""
Feb 13 19:52:58.887867 containerd[1482]: time="2025-02-13T19:52:58.887718348Z" level=info msg="Forcibly stopping sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\""
Feb 13 19:52:58.887996 containerd[1482]: time="2025-02-13T19:52:58.887836970Z" level=info msg="TearDown network for sandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\" successfully"
Feb 13 19:52:58.891650 containerd[1482]: time="2025-02-13T19:52:58.891489188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:52:58.891650 containerd[1482]: time="2025-02-13T19:52:58.891636917Z" level=info msg="RemovePodSandbox \"20f6bc41ba89c6f365c74d1d2e0cf7a4b089b03ca16a23e3fc74744ebbd8975a\" returns successfully"
Feb 13 19:52:59.812488 kubelet[1855]: E0213 19:52:59.812418 1855 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"