Jan 29 16:33:53.158248 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 14:51:22 -00 2025 Jan 29 16:33:53.158295 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:33:53.158315 kernel: BIOS-provided physical RAM map: Jan 29 16:33:53.158330 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 29 16:33:53.158344 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 29 16:33:53.158358 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 29 16:33:53.158392 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 29 16:33:53.158407 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 29 16:33:53.158427 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd325fff] usable Jan 29 16:33:53.158441 kernel: BIOS-e820: [mem 0x00000000bd326000-0x00000000bd32dfff] ACPI data Jan 29 16:33:53.158457 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable Jan 29 16:33:53.158472 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jan 29 16:33:53.158487 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 29 16:33:53.158502 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 29 16:33:53.158526 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 29 16:33:53.158543 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 29 16:33:53.158559 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Jan 29 16:33:53.158576 kernel: NX (Execute Disable) protection: active Jan 29 16:33:53.158593 kernel: APIC: Static calls initialized Jan 29 16:33:53.158609 kernel: efi: EFI v2.7 by EDK II Jan 29 16:33:53.158626 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd326018 Jan 29 16:33:53.158643 kernel: random: crng init done Jan 29 16:33:53.158668 kernel: secureboot: Secure boot disabled Jan 29 16:33:53.158684 kernel: SMBIOS 2.4 present. Jan 29 16:33:53.158705 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 29 16:33:53.158721 kernel: Hypervisor detected: KVM Jan 29 16:33:53.158737 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 16:33:53.158753 kernel: kvm-clock: using sched offset of 13167098967 cycles Jan 29 16:33:53.158771 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 16:33:53.158788 kernel: tsc: Detected 2299.998 MHz processor Jan 29 16:33:53.158805 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 16:33:53.158822 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 16:33:53.158838 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 29 16:33:53.158855 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 29 16:33:53.158876 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 16:33:53.158893 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 29 16:33:53.158909 kernel: Using GB pages for direct mapping Jan 29 16:33:53.158926 kernel: ACPI: Early table checksum verification disabled Jan 29 16:33:53.158942 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 29 16:33:53.158960 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 29 16:33:53.158985 kernel: ACPI: FACP 
0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 29 16:33:53.159006 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 29 16:33:53.159024 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 29 16:33:53.159042 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 29 16:33:53.159061 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 29 16:33:53.159080 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 29 16:33:53.159099 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 29 16:33:53.159116 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 29 16:33:53.159138 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 29 16:33:53.159155 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 29 16:33:53.159173 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 29 16:33:53.159192 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 29 16:33:53.159210 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 29 16:33:53.159228 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 29 16:33:53.159246 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 29 16:33:53.159263 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 29 16:33:53.159281 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 29 16:33:53.159303 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 29 16:33:53.159321 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 16:33:53.159339 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 16:33:53.159356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00000000-0x0009ffff] Jan 29 16:33:53.159394 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 29 16:33:53.159412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 29 16:33:53.159431 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 29 16:33:53.159449 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 29 16:33:53.159468 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 29 16:33:53.159491 kernel: Zone ranges: Jan 29 16:33:53.159510 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 16:33:53.159528 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 16:33:53.159546 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 29 16:33:53.159564 kernel: Movable zone start for each node Jan 29 16:33:53.159583 kernel: Early memory node ranges Jan 29 16:33:53.159601 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 29 16:33:53.159619 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 29 16:33:53.159637 kernel: node 0: [mem 0x0000000000100000-0x00000000bd325fff] Jan 29 16:33:53.159727 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff] Jan 29 16:33:53.159746 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 29 16:33:53.159763 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 29 16:33:53.159782 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 29 16:33:53.159800 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 16:33:53.159818 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 29 16:33:53.159837 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 29 16:33:53.159855 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Jan 29 16:33:53.159873 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 29 16:33:53.159895 kernel: On 
node 0, zone Normal: 32 pages in unavailable ranges Jan 29 16:33:53.159911 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 16:33:53.159928 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 16:33:53.159946 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 16:33:53.159964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 16:33:53.159982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 16:33:53.160000 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 16:33:53.160018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 16:33:53.160037 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 16:33:53.160059 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 16:33:53.160077 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 29 16:33:53.160095 kernel: Booting paravirtualized kernel on KVM Jan 29 16:33:53.160114 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 16:33:53.160132 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 16:33:53.160150 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 16:33:53.160168 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 16:33:53.160186 kernel: pcpu-alloc: [0] 0 1 Jan 29 16:33:53.160204 kernel: kvm-guest: PV spinlocks enabled Jan 29 16:33:53.160226 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 16:33:53.160247 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce 
verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d Jan 29 16:33:53.160265 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 16:33:53.160283 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 29 16:33:53.160301 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 16:33:53.160319 kernel: Fallback order for Node 0: 0 Jan 29 16:33:53.160338 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Jan 29 16:33:53.160356 kernel: Policy zone: Normal Jan 29 16:33:53.162410 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 16:33:53.162440 kernel: software IO TLB: area num 2. Jan 29 16:33:53.162457 kernel: Memory: 7511316K/7860552K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43472K init, 1600K bss, 348980K reserved, 0K cma-reserved) Jan 29 16:33:53.162472 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 16:33:53.162491 kernel: Kernel/User page tables isolation: enabled Jan 29 16:33:53.162512 kernel: ftrace: allocating 37893 entries in 149 pages Jan 29 16:33:53.162533 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 16:33:53.162554 kernel: Dynamic Preempt: voluntary Jan 29 16:33:53.162605 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 16:33:53.162629 kernel: rcu: RCU event tracing is enabled. Jan 29 16:33:53.162664 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 16:33:53.162682 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 16:33:53.162705 kernel: Rude variant of Tasks RCU enabled. Jan 29 16:33:53.162724 kernel: Tracing variant of Tasks RCU enabled. Jan 29 16:33:53.162742 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 16:33:53.162761 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 16:33:53.162780 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 16:33:53.162802 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 16:33:53.162820 kernel: Console: colour dummy device 80x25 Jan 29 16:33:53.162839 kernel: printk: console [ttyS0] enabled Jan 29 16:33:53.162857 kernel: ACPI: Core revision 20230628 Jan 29 16:33:53.162875 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 16:33:53.162893 kernel: x2apic enabled Jan 29 16:33:53.162911 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 16:33:53.162930 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 29 16:33:53.162949 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 16:33:53.162972 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jan 29 16:33:53.162991 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 29 16:33:53.163009 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 29 16:33:53.163028 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 16:33:53.163046 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 16:33:53.163065 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 16:33:53.163083 kernel: Spectre V2 : Mitigation: IBRS Jan 29 16:33:53.163102 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 16:33:53.163120 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 16:33:53.163143 kernel: RETBleed: Mitigation: IBRS Jan 29 16:33:53.163168 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 16:33:53.163187 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 29 
16:33:53.163205 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 16:33:53.163224 kernel: MDS: Mitigation: Clear CPU buffers Jan 29 16:33:53.163243 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 16:33:53.163261 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 16:33:53.163280 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 16:33:53.163298 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 16:33:53.163320 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 16:33:53.163339 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 29 16:33:53.163357 kernel: Freeing SMP alternatives memory: 32K Jan 29 16:33:53.163392 kernel: pid_max: default: 32768 minimum: 301 Jan 29 16:33:53.163412 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 16:33:53.163430 kernel: landlock: Up and running. Jan 29 16:33:53.163448 kernel: SELinux: Initializing. Jan 29 16:33:53.163466 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 16:33:53.163485 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 16:33:53.163509 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 29 16:33:53.163528 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:33:53.163546 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:33:53.163565 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:33:53.163583 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. 
Jan 29 16:33:53.163601 kernel: signal: max sigframe size: 1776 Jan 29 16:33:53.163620 kernel: rcu: Hierarchical SRCU implementation. Jan 29 16:33:53.163639 kernel: rcu: Max phase no-delay instances is 400. Jan 29 16:33:53.163668 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 16:33:53.163687 kernel: smp: Bringing up secondary CPUs ... Jan 29 16:33:53.163705 kernel: smpboot: x86: Booting SMP configuration: Jan 29 16:33:53.163724 kernel: .... node #0, CPUs: #1 Jan 29 16:33:53.163744 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 16:33:53.163764 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 16:33:53.163783 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 16:33:53.163801 kernel: smpboot: Max logical packages: 1 Jan 29 16:33:53.163820 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 29 16:33:53.163843 kernel: devtmpfs: initialized Jan 29 16:33:53.163861 kernel: x86/mm: Memory block size: 128MB Jan 29 16:33:53.163879 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 29 16:33:53.163898 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 16:33:53.163917 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 16:33:53.163935 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 16:33:53.163954 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 16:33:53.163972 kernel: audit: initializing netlink subsys (disabled) Jan 29 16:33:53.163990 kernel: audit: type=2000 audit(1738168431.284:1): state=initialized audit_enabled=0 res=1 Jan 29 16:33:53.164012 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 16:33:53.164031 kernel: 
thermal_sys: Registered thermal governor 'user_space' Jan 29 16:33:53.164049 kernel: cpuidle: using governor menu Jan 29 16:33:53.164068 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 16:33:53.164086 kernel: dca service started, version 1.12.1 Jan 29 16:33:53.164104 kernel: PCI: Using configuration type 1 for base access Jan 29 16:33:53.164123 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 16:33:53.164142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 16:33:53.164160 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 16:33:53.164183 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 16:33:53.164201 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 16:33:53.164219 kernel: ACPI: Added _OSI(Module Device) Jan 29 16:33:53.164237 kernel: ACPI: Added _OSI(Processor Device) Jan 29 16:33:53.164256 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 16:33:53.164275 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 16:33:53.164293 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 16:33:53.164311 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 16:33:53.164329 kernel: ACPI: Interpreter enabled Jan 29 16:33:53.164352 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 16:33:53.164370 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 16:33:53.166412 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 16:33:53.166435 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 29 16:33:53.166453 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 16:33:53.166472 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 16:33:53.166753 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 16:33:53.166949 kernel: acpi PNP0A03:00: _OSC: not 
requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 16:33:53.167137 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 16:33:53.167159 kernel: PCI host bridge to bus 0000:00 Jan 29 16:33:53.167334 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 16:33:53.167522 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 16:33:53.167693 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 16:33:53.167854 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 29 16:33:53.168023 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 16:33:53.168225 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 16:33:53.168504 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 29 16:33:53.168718 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 16:33:53.168899 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 16:33:53.169091 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 29 16:33:53.169280 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 29 16:33:53.169480 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 29 16:33:53.169678 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 16:33:53.169862 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 29 16:33:53.170044 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 29 16:33:53.170233 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 16:33:53.170429 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 29 16:33:53.170618 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 29 16:33:53.170644 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 16:33:53.170674 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 
10 Jan 29 16:33:53.170695 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 16:33:53.170715 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 16:33:53.170735 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 16:33:53.170755 kernel: iommu: Default domain type: Translated Jan 29 16:33:53.170775 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 16:33:53.170795 kernel: efivars: Registered efivars operations Jan 29 16:33:53.170821 kernel: PCI: Using ACPI for IRQ routing Jan 29 16:33:53.170842 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 16:33:53.170862 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 29 16:33:53.170882 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 29 16:33:53.170900 kernel: e820: reserve RAM buffer [mem 0xbd326000-0xbfffffff] Jan 29 16:33:53.170920 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 29 16:33:53.170940 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 29 16:33:53.170960 kernel: vgaarb: loaded Jan 29 16:33:53.170980 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 16:33:53.171005 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 16:33:53.171035 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 16:33:53.171055 kernel: pnp: PnP ACPI init Jan 29 16:33:53.171075 kernel: pnp: PnP ACPI: found 7 devices Jan 29 16:33:53.171095 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 16:33:53.171115 kernel: NET: Registered PF_INET protocol family Jan 29 16:33:53.171135 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 16:33:53.171156 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 29 16:33:53.171177 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 16:33:53.171201 kernel: TCP established hash table entries: 
65536 (order: 7, 524288 bytes, linear) Jan 29 16:33:53.171221 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 16:33:53.171241 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 29 16:33:53.171261 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 16:33:53.171282 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 16:33:53.171302 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 16:33:53.171322 kernel: NET: Registered PF_XDP protocol family Jan 29 16:33:53.171530 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 16:33:53.171731 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 16:33:53.171919 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 16:33:53.172090 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 29 16:33:53.172281 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 16:33:53.172309 kernel: PCI: CLS 0 bytes, default 64 Jan 29 16:33:53.172329 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 16:33:53.172349 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 29 16:33:53.172369 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 16:33:53.172413 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 16:33:53.172432 kernel: clocksource: Switched to clocksource tsc Jan 29 16:33:53.172451 kernel: Initialise system trusted keyrings Jan 29 16:33:53.172467 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 29 16:33:53.172482 kernel: Key type asymmetric registered Jan 29 16:33:53.172498 kernel: Asymmetric key parser 'x509' registered Jan 29 16:33:53.172515 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 16:33:53.172532 kernel: 
io scheduler mq-deadline registered Jan 29 16:33:53.172550 kernel: io scheduler kyber registered Jan 29 16:33:53.172572 kernel: io scheduler bfq registered Jan 29 16:33:53.172590 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 16:33:53.172608 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 16:33:53.172826 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 29 16:33:53.172851 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 29 16:33:53.173040 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 29 16:33:53.173061 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 16:33:53.173257 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 29 16:33:53.173290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:33:53.173308 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 16:33:53.173324 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 16:33:53.173342 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 29 16:33:53.173360 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 29 16:33:53.173603 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 29 16:33:53.173629 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 16:33:53.173655 kernel: i8042: Warning: Keylock active Jan 29 16:33:53.173680 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 16:33:53.173698 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 16:33:53.173883 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 29 16:33:53.174051 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 16:33:53.174219 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T16:33:52 UTC (1738168432) Jan 29 16:33:53.174430 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 16:33:53.174456 kernel: intel_pstate: CPU model not 
supported Jan 29 16:33:53.174475 kernel: pstore: Using crash dump compression: deflate Jan 29 16:33:53.174500 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 16:33:53.174517 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:33:53.174533 kernel: Segment Routing with IPv6 Jan 29 16:33:53.174551 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:33:53.174567 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:33:53.174585 kernel: Key type dns_resolver registered Jan 29 16:33:53.174603 kernel: IPI shorthand broadcast: enabled Jan 29 16:33:53.174622 kernel: sched_clock: Marking stable (892005917, 193311889)->(1175057095, -89739289) Jan 29 16:33:53.174640 kernel: registered taskstats version 1 Jan 29 16:33:53.174673 kernel: Loading compiled-in X.509 certificates Jan 29 16:33:53.174691 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 68134fdf6dac3690da6e3bc9c22b042a5c364340' Jan 29 16:33:53.174710 kernel: Key type .fscrypt registered Jan 29 16:33:53.174729 kernel: Key type fscrypt-provisioning registered Jan 29 16:33:53.174749 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:33:53.174769 kernel: ima: No architecture policies found Jan 29 16:33:53.174789 kernel: clk: Disabling unused clocks Jan 29 16:33:53.174809 kernel: Freeing unused kernel image (initmem) memory: 43472K Jan 29 16:33:53.174828 kernel: Write protecting the kernel read-only data: 38912k Jan 29 16:33:53.174852 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Jan 29 16:33:53.174871 kernel: Run /init as init process Jan 29 16:33:53.174891 kernel: with arguments: Jan 29 16:33:53.174909 kernel: /init Jan 29 16:33:53.174928 kernel: with environment: Jan 29 16:33:53.174948 kernel: HOME=/ Jan 29 16:33:53.174967 kernel: TERM=linux Jan 29 16:33:53.174987 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:33:53.175006 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Jan 29 
16:33:53.175032 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:33:53.175058 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:33:53.175080 systemd[1]: Detected virtualization google. Jan 29 16:33:53.175100 systemd[1]: Detected architecture x86-64. Jan 29 16:33:53.175120 systemd[1]: Running in initrd. Jan 29 16:33:53.175140 systemd[1]: No hostname configured, using default hostname. Jan 29 16:33:53.175161 systemd[1]: Hostname set to . Jan 29 16:33:53.175185 systemd[1]: Initializing machine ID from random generator. Jan 29 16:33:53.175205 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:33:53.175225 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:33:53.175245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:33:53.175267 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:33:53.175287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:33:53.175307 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:33:53.175334 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:33:53.175372 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:33:53.175421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Jan 29 16:33:53.175443 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:33:53.175464 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:33:53.175485 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:33:53.175510 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:33:53.175534 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:33:53.175555 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:33:53.175577 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:33:53.175598 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:33:53.175620 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:33:53.175642 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:33:53.175671 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:33:53.175696 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:33:53.175717 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:33:53.175739 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:33:53.175760 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:33:53.175781 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:33:53.175801 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:33:53.175822 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:33:53.175843 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:33:53.175864 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:33:53.175889 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 16:33:53.175910 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 16:33:53.175931 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:33:53.175953 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 16:33:53.176028 systemd-journald[184]: Collecting audit messages is disabled.
Jan 29 16:33:53.176075 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 16:33:53.176097 systemd-journald[184]: Journal started
Jan 29 16:33:53.176142 systemd-journald[184]: Runtime Journal (/run/log/journal/8a214b226f6c4c94ae77e2575b402e0a) is 8M, max 148.6M, 140.6M free.
Jan 29 16:33:53.178538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:33:53.131773 systemd-modules-load[185]: Inserted module 'overlay'
Jan 29 16:33:53.191619 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:33:53.191669 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 16:33:53.194661 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 29 16:33:53.201561 kernel: Bridge firewalling registered
Jan 29 16:33:53.198822 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:33:53.205845 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 16:33:53.217612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:33:53.226654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:33:53.231859 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:33:53.247702 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:33:53.268538 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:33:53.274581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:33:53.279018 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:33:53.289934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:33:53.300662 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 16:33:53.310622 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:33:53.332817 dracut-cmdline[217]: dracut-dracut-053
Jan 29 16:33:53.337647 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=baa4132e9c604885344fa8e79d67c80ef841a135b233c762ecfe0386901a895d
Jan 29 16:33:53.373622 systemd-resolved[218]: Positive Trust Anchors:
Jan 29 16:33:53.373643 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:33:53.373717 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:33:53.381112 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 29 16:33:53.382923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:33:53.403696 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:33:53.449423 kernel: SCSI subsystem initialized
Jan 29 16:33:53.461444 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 16:33:53.473424 kernel: iscsi: registered transport (tcp)
Jan 29 16:33:53.496738 kernel: iscsi: registered transport (qla4xxx)
Jan 29 16:33:53.496828 kernel: QLogic iSCSI HBA Driver
Jan 29 16:33:53.550936 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 16:33:53.557638 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 16:33:53.598067 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 16:33:53.598159 kernel: device-mapper: uevent: version 1.0.3
Jan 29 16:33:53.598188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 16:33:53.644439 kernel: raid6: avx2x4 gen() 18124 MB/s
Jan 29 16:33:53.661418 kernel: raid6: avx2x2 gen() 18165 MB/s
Jan 29 16:33:53.687410 kernel: raid6: avx2x1 gen() 13832 MB/s
Jan 29 16:33:53.687473 kernel: raid6: using algorithm avx2x2 gen() 18165 MB/s
Jan 29 16:33:53.714509 kernel: raid6: .... xor() 18700 MB/s, rmw enabled
Jan 29 16:33:53.714603 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 16:33:53.743418 kernel: xor: automatically using best checksumming function avx
Jan 29 16:33:53.913423 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 16:33:53.927269 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 16:33:53.951679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:33:54.002633 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 29 16:33:54.011045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:33:54.041654 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 16:33:54.083077 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Jan 29 16:33:54.121691 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:33:54.126687 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:33:54.239106 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:33:54.270630 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 16:33:54.321417 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:33:54.333709 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:33:54.361403 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 16:33:54.373935 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:33:54.452546 kernel: scsi host0: Virtio SCSI HBA
Jan 29 16:33:54.452853 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 29 16:33:54.401282 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 16:33:54.446649 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 16:33:54.489982 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 16:33:54.490059 kernel: AES CTR mode by8 optimization enabled
Jan 29 16:33:54.508953 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 16:33:54.546675 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 29 16:33:54.592907 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 29 16:33:54.593214 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 29 16:33:54.593482 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 29 16:33:54.593710 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 16:33:54.593945 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 16:33:54.593973 kernel: GPT:17805311 != 25165823
Jan 29 16:33:54.593998 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 16:33:54.594021 kernel: GPT:17805311 != 25165823
Jan 29 16:33:54.594052 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 16:33:54.594075 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:33:54.594100 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 29 16:33:54.521206 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:33:54.595516 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:33:54.608026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 16:33:54.664539 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (447)
Jan 29 16:33:54.608296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:33:54.686713 kernel: BTRFS: device fsid b756ea5d-2d08-456f-8231-a684aa2555c3 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (466)
Jan 29 16:33:54.631855 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:33:54.702877 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:33:54.727957 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:33:54.749077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:33:54.763768 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 29 16:33:54.792391 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 29 16:33:54.832923 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 29 16:33:54.852127 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 29 16:33:54.863679 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 29 16:33:54.894706 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 16:33:54.930821 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 16:33:54.944083 disk-uuid[544]: Primary Header is updated.
Jan 29 16:33:54.944083 disk-uuid[544]: Secondary Entries is updated.
Jan 29 16:33:54.944083 disk-uuid[544]: Secondary Header is updated.
Jan 29 16:33:54.966410 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:33:54.969803 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 16:33:55.998407 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 16:33:55.998504 disk-uuid[545]: The operation has completed successfully.
Jan 29 16:33:56.079579 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 16:33:56.079731 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 16:33:56.158686 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 16:33:56.177651 sh[567]: Success
Jan 29 16:33:56.190610 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 29 16:33:56.289100 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 16:33:56.296787 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 16:33:56.325080 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 16:33:56.378781 kernel: BTRFS info (device dm-0): first mount of filesystem b756ea5d-2d08-456f-8231-a684aa2555c3
Jan 29 16:33:56.378897 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:33:56.378924 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 16:33:56.388232 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 16:33:56.395159 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 16:33:56.428449 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 16:33:56.434085 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 16:33:56.435085 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 16:33:56.444583 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 16:33:56.480770 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 16:33:56.514547 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:33:56.530930 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:33:56.531019 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:33:56.549656 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:33:56.549770 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:33:56.566310 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 16:33:56.583588 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:33:56.597211 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 16:33:56.625678 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 16:33:56.646740 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 16:33:56.682750 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:33:56.792143 systemd-networkd[751]: lo: Link UP
Jan 29 16:33:56.792158 systemd-networkd[751]: lo: Gained carrier
Jan 29 16:33:56.795341 systemd-networkd[751]: Enumeration completed
Jan 29 16:33:56.805285 ignition[731]: Ignition 2.20.0
Jan 29 16:33:56.795939 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:33:56.805296 ignition[731]: Stage: fetch-offline
Jan 29 16:33:56.798448 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:33:56.805367 ignition[731]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:33:56.798469 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:33:56.805377 ignition[731]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:33:56.802947 systemd-networkd[751]: eth0: Link UP
Jan 29 16:33:56.805696 ignition[731]: parsed url from cmdline: ""
Jan 29 16:33:56.802956 systemd-networkd[751]: eth0: Gained carrier
Jan 29 16:33:56.805704 ignition[731]: no config URL provided
Jan 29 16:33:56.802976 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:33:56.805721 ignition[731]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:33:56.813584 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.87/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 29 16:33:56.805883 ignition[731]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:33:56.826080 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 16:33:56.805899 ignition[731]: failed to fetch config: resource requires networking
Jan 29 16:33:56.851427 systemd[1]: Reached target network.target - Network.
Jan 29 16:33:56.806179 ignition[731]: Ignition finished successfully
Jan 29 16:33:56.874704 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 16:33:56.917025 ignition[761]: Ignition 2.20.0
Jan 29 16:33:56.925221 unknown[761]: fetched base config from "system"
Jan 29 16:33:56.917037 ignition[761]: Stage: fetch
Jan 29 16:33:56.925236 unknown[761]: fetched base config from "system"
Jan 29 16:33:56.917256 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:33:56.925247 unknown[761]: fetched user config from "gcp"
Jan 29 16:33:56.917269 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:33:56.927495 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 16:33:56.917452 ignition[761]: parsed url from cmdline: ""
Jan 29 16:33:56.945703 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 16:33:56.917464 ignition[761]: no config URL provided
Jan 29 16:33:56.981988 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 16:33:56.917473 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 16:33:57.008615 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 16:33:56.917489 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Jan 29 16:33:57.051997 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 16:33:56.917530 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 29 16:33:57.071003 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 16:33:56.921604 ignition[761]: GET result: OK
Jan 29 16:33:57.087631 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 16:33:56.921702 ignition[761]: parsing config with SHA512: 7af191af9aaa8f8601a9d4da73f943a8c481cf11c44e92bb975815190b10839b30f1253968716fd775488853409f68d0b284317c570b2187f09dbc9b6de45970
Jan 29 16:33:57.105639 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:33:56.925585 ignition[761]: fetch: fetch complete
Jan 29 16:33:57.119638 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:33:56.925593 ignition[761]: fetch: fetch passed
Jan 29 16:33:57.134603 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:33:56.925685 ignition[761]: Ignition finished successfully
Jan 29 16:33:57.154790 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 16:33:56.968956 ignition[767]: Ignition 2.20.0
Jan 29 16:33:56.968967 ignition[767]: Stage: kargs
Jan 29 16:33:56.969175 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:33:56.969191 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:33:56.970022 ignition[767]: kargs: kargs passed
Jan 29 16:33:56.970078 ignition[767]: Ignition finished successfully
Jan 29 16:33:57.033141 ignition[772]: Ignition 2.20.0
Jan 29 16:33:57.033156 ignition[772]: Stage: disks
Jan 29 16:33:57.033434 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 29 16:33:57.033455 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:33:57.034342 ignition[772]: disks: disks passed
Jan 29 16:33:57.034431 ignition[772]: Ignition finished successfully
Jan 29 16:33:57.212482 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 16:33:57.393508 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 16:33:57.398519 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 16:33:57.535400 kernel: EXT4-fs (sda9): mounted filesystem 93ea9bb6-d6ba-4a18-a828-f0002683a7b4 r/w with ordered data mode. Quota mode: none.
Jan 29 16:33:57.536494 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 16:33:57.537392 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 16:33:57.571631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:33:57.586587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 16:33:57.605463 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (790)
Jan 29 16:33:57.626046 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:33:57.626149 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:33:57.626177 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:33:57.635947 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 16:33:57.636046 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 16:33:57.690585 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:33:57.690640 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:33:57.636093 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:33:57.674803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:33:57.698936 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 16:33:57.714644 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 16:33:57.841785 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 16:33:57.852538 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory
Jan 29 16:33:57.862615 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 16:33:57.872570 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 16:33:58.018423 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 16:33:58.022635 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 16:33:58.062431 kernel: BTRFS info (device sda6): last unmount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:33:58.068718 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 16:33:58.078856 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 16:33:58.125406 ignition[902]: INFO : Ignition 2.20.0
Jan 29 16:33:58.125406 ignition[902]: INFO : Stage: mount
Jan 29 16:33:58.125406 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:33:58.125406 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:33:58.176568 ignition[902]: INFO : mount: mount passed
Jan 29 16:33:58.176568 ignition[902]: INFO : Ignition finished successfully
Jan 29 16:33:58.128251 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 16:33:58.142191 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 16:33:58.164528 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 16:33:58.297635 systemd-networkd[751]: eth0: Gained IPv6LL
Jan 29 16:33:58.373725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 16:33:58.398419 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (916)
Jan 29 16:33:58.416199 kernel: BTRFS info (device sda6): first mount of filesystem 69adaa96-08ce-46f2-b4e9-2d5873de430e
Jan 29 16:33:58.416294 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 16:33:58.416322 kernel: BTRFS info (device sda6): using free space tree
Jan 29 16:33:58.439131 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 16:33:58.439284 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 16:33:58.442694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 16:33:58.483198 ignition[933]: INFO : Ignition 2.20.0
Jan 29 16:33:58.483198 ignition[933]: INFO : Stage: files
Jan 29 16:33:58.497547 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:33:58.497547 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:33:58.497547 ignition[933]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 16:33:58.497547 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 16:33:58.497547 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 16:33:58.497547 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 16:33:58.497547 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 16:33:58.497547 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 16:33:58.495826 unknown[933]: wrote ssh authorized keys file for user: core
Jan 29 16:33:58.598593 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 16:33:58.598593 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 16:33:58.598593 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:33:58.598593 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 16:33:58.598593 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:33:58.598593 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:33:58.598593 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:33:58.598593 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 29 16:34:06.305753 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 16:34:06.681946 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 29 16:34:06.700668 ignition[933]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:34:06.700668 ignition[933]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 16:34:06.700668 ignition[933]: INFO : files: files passed
Jan 29 16:34:06.700668 ignition[933]: INFO : Ignition finished successfully
Jan 29 16:34:06.684505 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 16:34:06.706773 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 16:34:06.731663 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 16:34:06.773977 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 16:34:06.774099 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 16:34:06.834722 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:34:06.834722 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:34:06.825172 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:34:06.901629 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 16:34:06.846967 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 16:34:06.865665 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 16:34:06.947293 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 16:34:06.947479 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 16:34:06.966449 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 16:34:06.976813 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 16:34:07.010721 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 16:34:07.016743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 16:34:07.067064 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:34:07.074625 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 16:34:07.118361 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:34:07.137784 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 16:34:07.138229 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 16:34:07.157976 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 16:34:07.158177 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 16:34:07.202659 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 16:34:07.203079 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 16:34:07.220000 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 16:34:07.236070 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 16:34:07.253989 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 16:34:07.284916 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 16:34:07.304885 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 16:34:07.335784 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 16:34:07.336211 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 16:34:07.352984 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 16:34:07.369993 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 16:34:07.370271 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 16:34:07.400980 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:34:07.410971 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 16:34:07.428949 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 16:34:07.429136 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 16:34:07.448947 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 16:34:07.449219 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 16:34:07.505672 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 16:34:07.506149 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 16:34:07.516018 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 16:34:07.516215 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 16:34:07.553726 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 16:34:07.583092 ignition[986]: INFO : Ignition 2.20.0
Jan 29 16:34:07.583092 ignition[986]: INFO : Stage: umount
Jan 29 16:34:07.606568 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 16:34:07.606568 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 29 16:34:07.606568 ignition[986]: INFO : umount: umount passed
Jan 29 16:34:07.606568 ignition[986]: INFO : Ignition finished successfully
Jan 29 16:34:07.589796 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 16:34:07.599612 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 16:34:07.599892 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:34:07.638033 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 16:34:07.638236 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 16:34:07.665941 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 16:34:07.667290 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 16:34:07.667460 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 16:34:07.682221 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 16:34:07.682348 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 16:34:07.705201 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 16:34:07.705333 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 16:34:07.714209 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 16:34:07.714277 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:34:07.746866 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:34:07.746958 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:34:07.764852 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:34:07.764945 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:34:07.772926 systemd[1]: Stopped target network.target - Network. Jan 29 16:34:07.797601 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:34:07.797914 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:34:07.805853 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:34:07.822855 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:34:07.826523 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:34:07.837788 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:34:07.873567 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:34:07.888693 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:34:07.889009 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:34:07.897828 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:34:07.897884 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:34:07.931799 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:34:07.931887 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:34:07.940875 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:34:07.940954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:34:07.965773 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jan 29 16:34:07.965877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:34:07.976174 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:34:08.003799 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:34:08.004245 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:34:08.004406 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:34:08.031585 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:34:08.032006 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:34:08.032155 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:34:08.054091 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:34:08.055421 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:34:08.055512 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:34:08.083605 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:34:08.102518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:34:08.102778 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:34:08.121750 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:34:08.121849 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:34:08.139854 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:34:08.139929 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:34:08.164770 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:34:08.164863 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 29 16:34:08.183931 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:34:08.594583 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 16:34:08.210941 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:34:08.211041 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:34:08.211587 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:34:08.211760 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:34:08.238826 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:34:08.238898 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:34:08.250658 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:34:08.250762 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:34:08.260631 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:34:08.260774 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:34:08.280609 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:34:08.280767 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:34:08.311616 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:34:08.311746 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:34:08.347726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:34:08.351773 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:34:08.351885 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:34:08.368924 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 29 16:34:08.368997 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:34:08.399786 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:34:08.399874 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:34:08.421768 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:34:08.421858 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:34:08.442296 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 16:34:08.442442 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:34:08.443052 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:34:08.443197 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:34:08.461049 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:34:08.461169 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:34:08.472499 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:34:08.494675 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:34:08.538498 systemd[1]: Switching root. 
Jan 29 16:34:08.927610 systemd-journald[184]: Journal stopped Jan 29 16:34:11.409634 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:34:11.409702 kernel: SELinux: policy capability open_perms=1 Jan 29 16:34:11.409724 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:34:11.409742 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:34:11.409760 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:34:11.409776 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:34:11.409796 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:34:11.409814 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:34:11.409837 kernel: audit: type=1403 audit(1738168449.084:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:34:11.409861 systemd[1]: Successfully loaded SELinux policy in 93.385ms. Jan 29 16:34:11.409882 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.184ms. Jan 29 16:34:11.409904 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:34:11.409924 systemd[1]: Detected virtualization google. Jan 29 16:34:11.409943 systemd[1]: Detected architecture x86-64. Jan 29 16:34:11.409969 systemd[1]: Detected first boot. Jan 29 16:34:11.409990 systemd[1]: Initializing machine ID from random generator. Jan 29 16:34:11.410010 zram_generator::config[1030]: No configuration found. 
Jan 29 16:34:11.410031 kernel: Guest personality initialized and is inactive Jan 29 16:34:11.410053 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 16:34:11.410076 kernel: Initialized host personality Jan 29 16:34:11.410095 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:34:11.410115 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:34:11.410137 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:34:11.410159 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:34:11.410180 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:34:11.410199 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:34:11.410220 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:34:11.410242 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:34:11.410267 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:34:11.410288 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:34:11.410309 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:34:11.410332 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:34:11.410352 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:34:11.410372 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:34:11.411983 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:34:11.412018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:34:11.412141 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 29 16:34:11.412170 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:34:11.412194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:34:11.412218 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:34:11.412250 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 16:34:11.412274 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:34:11.412297 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:34:11.412325 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:34:11.412347 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:34:11.412368 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:34:11.412424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:34:11.412449 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:34:11.412472 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:34:11.412494 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:34:11.412517 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:34:11.412554 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:34:11.412578 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:34:11.412599 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:34:11.412619 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:34:11.412646 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 16:34:11.412670 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:34:11.412695 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:34:11.412718 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:34:11.412741 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:34:11.412765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:11.412788 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:34:11.412810 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:34:11.412840 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:34:11.412864 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:34:11.412889 systemd[1]: Reached target machines.target - Containers. Jan 29 16:34:11.412913 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:34:11.412937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:34:11.412959 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:34:11.412983 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:34:11.413008 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:34:11.413031 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:34:11.413060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 29 16:34:11.413083 kernel: ACPI: bus type drm_connector registered Jan 29 16:34:11.413106 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:34:11.413129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:34:11.413152 kernel: fuse: init (API version 7.39) Jan 29 16:34:11.413172 kernel: loop: module loaded Jan 29 16:34:11.413195 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:34:11.413224 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:34:11.413247 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:34:11.413270 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:34:11.413293 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 16:34:11.413317 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:34:11.413341 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:34:11.413364 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:34:11.413457 systemd-journald[1118]: Collecting audit messages is disabled. Jan 29 16:34:11.413507 systemd-journald[1118]: Journal started Jan 29 16:34:11.413556 systemd-journald[1118]: Runtime Journal (/run/log/journal/efcbe4024b034e25b01f5e9c8a5b1e8d) is 8M, max 148.6M, 140.6M free. Jan 29 16:34:11.416433 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:34:10.128247 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:34:10.142309 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Jan 29 16:34:10.142886 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:34:11.456424 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:34:11.483448 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:34:11.513426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:34:11.535793 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:34:11.535909 systemd[1]: Stopped verity-setup.service. Jan 29 16:34:11.564434 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:11.574424 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:34:11.585127 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:34:11.595871 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:34:11.606929 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:34:11.616854 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:34:11.626821 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:34:11.636857 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:34:11.647031 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:34:11.659037 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:34:11.671023 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:34:11.671431 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:34:11.683011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:34:11.683315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 29 16:34:11.695014 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:34:11.695312 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:34:11.706006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:34:11.706307 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:34:11.718014 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:34:11.718401 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:34:11.729022 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:34:11.729396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:34:11.740121 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:34:11.751024 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:34:11.762979 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:34:11.775048 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:34:11.787050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:34:11.811923 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:34:11.828602 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:34:11.851599 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 16:34:11.861614 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:34:11.861691 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:34:11.863412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
Jan 29 16:34:11.886670 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:34:11.905836 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:34:11.915847 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:34:11.927031 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:34:11.944431 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:34:11.955777 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:34:11.963659 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:34:11.974248 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:34:11.983466 systemd-journald[1118]: Time spent on flushing to /var/log/journal/efcbe4024b034e25b01f5e9c8a5b1e8d is 55.937ms for 929 entries. Jan 29 16:34:11.983466 systemd-journald[1118]: System Journal (/var/log/journal/efcbe4024b034e25b01f5e9c8a5b1e8d) is 8M, max 584.8M, 576.8M free. Jan 29 16:34:12.071043 systemd-journald[1118]: Received client request to flush runtime journal. Jan 29 16:34:11.992799 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:34:12.010979 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:34:12.032549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:34:12.051180 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:34:12.068481 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 29 16:34:12.083602 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:34:12.093046 kernel: loop0: detected capacity change from 0 to 147912 Jan 29 16:34:12.102309 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:34:12.115199 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:34:12.129138 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:34:12.141117 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:34:12.167842 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:34:12.183414 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:34:12.196641 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:34:12.215296 udevadm[1156]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 16:34:12.228003 kernel: loop1: detected capacity change from 0 to 138176 Jan 29 16:34:12.238235 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jan 29 16:34:12.238273 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. Jan 29 16:34:12.253106 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:34:12.255079 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:34:12.267144 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:34:12.291594 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 16:34:12.318403 kernel: loop2: detected capacity change from 0 to 205544 Jan 29 16:34:12.401118 kernel: loop3: detected capacity change from 0 to 52152 Jan 29 16:34:12.420439 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 29 16:34:12.448184 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:34:12.491100 kernel: loop4: detected capacity change from 0 to 147912 Jan 29 16:34:12.500479 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 29 16:34:12.500568 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 29 16:34:12.518400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:34:12.549424 kernel: loop5: detected capacity change from 0 to 138176 Jan 29 16:34:12.618416 kernel: loop6: detected capacity change from 0 to 205544 Jan 29 16:34:12.658535 kernel: loop7: detected capacity change from 0 to 52152 Jan 29 16:34:12.692308 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 29 16:34:12.694862 (sd-merge)[1179]: Merged extensions into '/usr'. Jan 29 16:34:12.704871 systemd[1]: Reload requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:34:12.705071 systemd[1]: Reloading... Jan 29 16:34:12.906428 zram_generator::config[1214]: No configuration found. Jan 29 16:34:13.192954 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:34:13.248406 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:34:13.356629 systemd[1]: Reloading finished in 650 ms. Jan 29 16:34:13.372854 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:34:13.384227 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:34:13.410673 systemd[1]: Starting ensure-sysext.service... Jan 29 16:34:13.429709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 29 16:34:13.466515 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:34:13.466544 systemd[1]: Reloading... Jan 29 16:34:13.476167 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:34:13.476715 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:34:13.483747 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:34:13.486864 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 29 16:34:13.487019 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Jan 29 16:34:13.494127 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:34:13.494149 systemd-tmpfiles[1250]: Skipping /boot Jan 29 16:34:13.519062 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:34:13.519097 systemd-tmpfiles[1250]: Skipping /boot Jan 29 16:34:13.639412 zram_generator::config[1279]: No configuration found. Jan 29 16:34:13.776494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:34:13.869235 systemd[1]: Reloading finished in 401 ms. Jan 29 16:34:13.883955 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:34:13.920429 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:34:13.944814 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:34:13.959806 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:34:13.981555 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 29 16:34:14.002273 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:34:14.019369 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:34:14.040213 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:34:14.054302 augenrules[1346]: No rules Jan 29 16:34:14.056782 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:34:14.057174 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:34:14.072962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:14.074651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:34:14.083804 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:34:14.101272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:34:14.120882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:34:14.131018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:34:14.131286 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:34:14.131948 systemd-udevd[1341]: Using default interface naming scheme 'v255'. Jan 29 16:34:14.141212 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:34:14.151542 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 16:34:14.160923 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:34:14.173673 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:34:14.186325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:34:14.186647 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:34:14.198323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:34:14.198661 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:34:14.210016 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:34:14.223496 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:34:14.223850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:34:14.244313 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:34:14.281594 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:34:14.364766 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 16:34:14.371048 systemd[1]: Finished ensure-sysext.service. Jan 29 16:34:14.381497 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:14.392616 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:34:14.402351 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:34:14.410616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:34:14.431676 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:34:14.451614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 29 16:34:14.459853 systemd-resolved[1337]: Positive Trust Anchors: Jan 29 16:34:14.459874 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:34:14.459940 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:34:14.470305 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:34:14.484002 systemd-resolved[1337]: Defaulting to hostname 'linux'. Jan 29 16:34:14.488696 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 16:34:14.497688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:34:14.497776 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:34:14.515679 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:34:14.525584 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:34:14.527499 augenrules[1390]: /sbin/augenrules: No change Jan 29 16:34:14.543623 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 29 16:34:14.551911 augenrules[1419]: No rules Jan 29 16:34:14.553567 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:34:14.553633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 16:34:14.554938 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:34:14.566107 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:34:14.567242 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:34:14.579238 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:34:14.580129 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:34:14.598124 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 16:34:14.597665 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:34:14.597990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:34:14.608261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:34:14.608634 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:34:14.622450 kernel: ACPI: button: Power Button [PWRF] Jan 29 16:34:14.635403 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1385) Jan 29 16:34:14.641856 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:34:14.650845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:34:14.651465 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 29 16:34:14.673371 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 29 16:34:14.685480 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 16:34:14.697405 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 16:34:14.731447 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 29 16:34:14.784632 kernel: EDAC MC: Ver: 3.0.0 Jan 29 16:34:14.784680 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 29 16:34:14.731102 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped. Jan 29 16:34:14.815755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 29 16:34:14.858444 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:34:14.875416 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:34:14.876663 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Jan 29 16:34:14.901950 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 29 16:34:14.920316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:34:14.930499 systemd-networkd[1408]: lo: Link UP Jan 29 16:34:14.930518 systemd-networkd[1408]: lo: Gained carrier Jan 29 16:34:14.931546 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:34:14.931872 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:34:14.937346 systemd-networkd[1408]: Enumeration completed Jan 29 16:34:14.944730 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:34:14.944745 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 16:34:14.945597 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:34:14.946182 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:34:14.946812 systemd[1]: Reached target network.target - Network. Jan 29 16:34:14.947769 systemd-networkd[1408]: eth0: Link UP Jan 29 16:34:14.947789 systemd-networkd[1408]: eth0: Gained carrier Jan 29 16:34:14.947821 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:34:14.956991 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:34:14.960497 systemd-networkd[1408]: eth0: DHCPv4 address 10.128.0.87/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 29 16:34:14.995116 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:34:15.011860 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 29 16:34:15.012754 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:34:15.028028 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:34:15.035708 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:34:15.069807 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:34:15.070472 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:34:15.105131 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:34:15.106265 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:34:15.112806 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jan 29 16:34:15.125502 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:34:15.134978 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:34:15.147497 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:34:15.157761 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:34:15.169672 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:34:15.180832 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:34:15.190818 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:34:15.202571 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:34:15.213584 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:34:15.213655 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:34:15.222588 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:34:15.233949 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:34:15.246871 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:34:15.259400 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:34:15.270829 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:34:15.282602 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:34:15.303419 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 16:34:15.314296 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Jan 29 16:34:15.326803 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:34:15.338959 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:34:15.349423 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:34:15.359555 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:34:15.368713 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:34:15.368776 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:34:15.374559 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:34:15.397421 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:34:15.417758 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:34:15.460289 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:34:15.470506 jq[1473]: false Jan 29 16:34:15.480780 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:34:15.491596 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 29 16:34:15.495171 coreos-metadata[1471]: Jan 29 16:34:15.494 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 29 16:34:15.501805 coreos-metadata[1471]: Jan 29 16:34:15.497 INFO Fetch successful Jan 29 16:34:15.501805 coreos-metadata[1471]: Jan 29 16:34:15.497 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 29 16:34:15.501805 coreos-metadata[1471]: Jan 29 16:34:15.497 INFO Fetch successful Jan 29 16:34:15.501805 coreos-metadata[1471]: Jan 29 16:34:15.497 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 29 16:34:15.501805 coreos-metadata[1471]: Jan 29 16:34:15.498 INFO Fetch successful Jan 29 16:34:15.501805 coreos-metadata[1471]: Jan 29 16:34:15.498 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 29 16:34:15.500686 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:34:15.503827 coreos-metadata[1471]: Jan 29 16:34:15.502 INFO Fetch successful Jan 29 16:34:15.519324 systemd[1]: Started ntpd.service - Network Time Service. 
Jan 29 16:34:15.529203 extend-filesystems[1476]: Found loop4 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found loop5 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found loop6 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found loop7 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found sda Jan 29 16:34:15.555604 extend-filesystems[1476]: Found sda1 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found sda2 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found sda3 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found usr Jan 29 16:34:15.555604 extend-filesystems[1476]: Found sda4 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found sda6 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found sda7 Jan 29 16:34:15.555604 extend-filesystems[1476]: Found sda9 Jan 29 16:34:15.555604 extend-filesystems[1476]: Checking size of /dev/sda9 Jan 29 16:34:15.723603 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 29 16:34:15.723667 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 29 16:34:15.723703 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1357) Jan 29 16:34:15.535984 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:34:15.723991 extend-filesystems[1476]: Resized partition /dev/sda9 Jan 29 16:34:15.571256 dbus-daemon[1472]: [system] SELinux support is enabled Jan 29 16:34:15.556884 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:34:15.733446 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:34:15.733446 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 16:34:15.733446 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 29 16:34:15.733446 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
Jan 29 16:34:15.577707 dbus-daemon[1472]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1408 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 16:34:15.626675 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:34:15.787114 extend-filesystems[1476]: Resized filesystem in /dev/sda9 Jan 29 16:34:15.694371 ntpd[1479]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:41 UTC 2025 (1): Starting Jan 29 16:34:15.650322 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 14:24:41 UTC 2025 (1): Starting Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: ---------------------------------------------------- Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: ntp-4 is maintained by Network Time Foundation, Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: corporation. 
Support and training for ntp-4 are Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: available at https://www.nwtime.org/support Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: ---------------------------------------------------- Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: proto: precision = 0.074 usec (-24) Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: basedate set to 2025-01-17 Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: gps base set to 2025-01-19 (week 2350) Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: Listen normally on 3 eth0 10.128.0.87:123 Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: Listen normally on 4 lo [::1]:123 Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: bind(21) AF_INET6 fe80::4001:aff:fe80:57%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:57%2#123 Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: failed to init interface for address fe80::4001:aff:fe80:57%2 Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: Listening on routing socket on fd #21 for interface updates Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:34:15.831192 ntpd[1479]: 29 Jan 16:34:15 ntpd[1479]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:34:15.694432 ntpd[1479]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 16:34:15.653259 systemd[1]: cgroup compatibility translation between 
legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:34:15.694447 ntpd[1479]: ---------------------------------------------------- Jan 29 16:34:15.663776 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:34:15.836168 update_engine[1500]: I20250129 16:34:15.773906 1500 main.cc:92] Flatcar Update Engine starting Jan 29 16:34:15.836168 update_engine[1500]: I20250129 16:34:15.782949 1500 update_check_scheduler.cc:74] Next update check in 3m52s Jan 29 16:34:15.694461 ntpd[1479]: ntp-4 is maintained by Network Time Foundation, Jan 29 16:34:15.700011 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:34:15.836775 jq[1503]: true Jan 29 16:34:15.694475 ntpd[1479]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 16:34:15.715850 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:34:15.694489 ntpd[1479]: corporation. Support and training for ntp-4 are Jan 29 16:34:15.766048 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:34:15.694503 ntpd[1479]: available at https://www.nwtime.org/support Jan 29 16:34:15.766465 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:34:15.694518 ntpd[1479]: ---------------------------------------------------- Jan 29 16:34:15.766955 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:34:15.699202 ntpd[1479]: proto: precision = 0.074 usec (-24) Jan 29 16:34:15.767253 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:34:15.700699 ntpd[1479]: basedate set to 2025-01-17 Jan 29 16:34:15.797313 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:34:15.700732 ntpd[1479]: gps base set to 2025-01-19 (week 2350) Jan 29 16:34:15.798461 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 29 16:34:15.705342 ntpd[1479]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 16:34:15.809086 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:34:15.707460 ntpd[1479]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 16:34:15.809495 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:34:15.707703 ntpd[1479]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 16:34:15.707883 ntpd[1479]: Listen normally on 3 eth0 10.128.0.87:123 Jan 29 16:34:15.707963 ntpd[1479]: Listen normally on 4 lo [::1]:123 Jan 29 16:34:15.708024 ntpd[1479]: bind(21) AF_INET6 fe80::4001:aff:fe80:57%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 16:34:15.708059 ntpd[1479]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:57%2#123 Jan 29 16:34:15.708081 ntpd[1479]: failed to init interface for address fe80::4001:aff:fe80:57%2 Jan 29 16:34:15.708125 ntpd[1479]: Listening on routing socket on fd #21 for interface updates Jan 29 16:34:15.714129 ntpd[1479]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:34:15.714172 ntpd[1479]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 16:34:15.864756 jq[1507]: true Jan 29 16:34:15.887028 (ntainerd)[1509]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:34:15.924070 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 16:34:15.933314 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:34:15.968723 systemd-logind[1497]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 16:34:15.968764 systemd-logind[1497]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 16:34:15.968797 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 16:34:15.976557 systemd-logind[1497]: New seat seat0. 
Jan 29 16:34:15.987875 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:34:15.997786 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:34:16.014709 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:34:16.025992 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:34:16.026340 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:34:16.026627 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:34:16.049588 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 16:34:16.053290 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:34:16.060582 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:34:16.060858 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:34:16.084791 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:34:16.110268 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:34:16.131281 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:34:16.131584 systemd[1]: Starting sshkeys.service... Jan 29 16:34:16.219812 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:34:16.245105 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 29 16:34:16.283556 systemd-networkd[1408]: eth0: Gained IPv6LL Jan 29 16:34:16.289837 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:34:16.302791 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:34:16.320863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:34:16.354958 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:34:16.356243 coreos-metadata[1546]: Jan 29 16:34:16.355 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 29 16:34:16.361684 coreos-metadata[1546]: Jan 29 16:34:16.361 INFO Fetch failed with 404: resource not found Jan 29 16:34:16.361684 coreos-metadata[1546]: Jan 29 16:34:16.361 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 29 16:34:16.362645 coreos-metadata[1546]: Jan 29 16:34:16.362 INFO Fetch successful Jan 29 16:34:16.362645 coreos-metadata[1546]: Jan 29 16:34:16.362 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 29 16:34:16.370596 coreos-metadata[1546]: Jan 29 16:34:16.370 INFO Fetch failed with 404: resource not found Jan 29 16:34:16.370877 coreos-metadata[1546]: Jan 29 16:34:16.370 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 29 16:34:16.371100 coreos-metadata[1546]: Jan 29 16:34:16.370 INFO Fetch failed with 404: resource not found Jan 29 16:34:16.371286 coreos-metadata[1546]: Jan 29 16:34:16.371 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 29 16:34:16.371572 coreos-metadata[1546]: Jan 29 16:34:16.371 INFO Fetch successful Jan 29 16:34:16.374352 systemd[1]: Starting oem-gce.service - GCE Linux Agent... 
Jan 29 16:34:16.375470 unknown[1546]: wrote ssh authorized keys file for user: core Jan 29 16:34:16.385542 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:34:16.388605 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:34:16.391437 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 16:34:16.393793 dbus-daemon[1472]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1537 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 16:34:16.398006 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 16:34:16.415471 init.sh[1563]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 29 16:34:16.415471 init.sh[1563]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 29 16:34:16.415471 init.sh[1563]: + /usr/bin/google_instance_setup Jan 29 16:34:16.440496 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:34:16.466514 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:34:16.480978 update-ssh-keys[1567]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:34:16.487807 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 16:34:16.514499 systemd[1]: Started sshd@0-10.128.0.87:22-147.75.109.163:51632.service - OpenSSH per-connection server daemon (147.75.109.163:51632). Jan 29 16:34:16.533113 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:34:16.545746 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:34:16.546289 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:34:16.558424 systemd[1]: Finished sshkeys.service. Jan 29 16:34:16.595003 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 29 16:34:16.633675 polkitd[1578]: Started polkitd version 121 Jan 29 16:34:16.641969 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:34:16.652674 polkitd[1578]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 16:34:16.652809 polkitd[1578]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 16:34:16.654634 polkitd[1578]: Finished loading, compiling and executing 2 rules Jan 29 16:34:16.657017 dbus-daemon[1472]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 16:34:16.657993 polkitd[1578]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 16:34:16.662240 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:34:16.667506 containerd[1509]: time="2025-01-29T16:34:16.667373160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:34:16.679017 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 16:34:16.690882 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:34:16.703718 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 16:34:16.744991 systemd-hostnamed[1537]: Hostname set to (transient) Jan 29 16:34:16.748153 systemd-resolved[1337]: System hostname changed to 'ci-4230-0-0-fe9f1bd1457e5a9893b6.c.flatcar-212911.internal'. Jan 29 16:34:16.750418 containerd[1509]: time="2025-01-29T16:34:16.748558572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:16.751406 containerd[1509]: time="2025-01-29T16:34:16.751322176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:16.751504 containerd[1509]: time="2025-01-29T16:34:16.751413859Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:34:16.751504 containerd[1509]: time="2025-01-29T16:34:16.751446613Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.751701611Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.751740505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.751839110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.751860580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.752190790Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.752215724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.752240365Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.752257818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.752368422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.752701164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753372 containerd[1509]: time="2025-01-29T16:34:16.752964996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:34:16.753867 containerd[1509]: time="2025-01-29T16:34:16.752992865Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:34:16.753867 containerd[1509]: time="2025-01-29T16:34:16.753136539Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:34:16.753867 containerd[1509]: time="2025-01-29T16:34:16.753210963Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.761871065Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.761973957Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.762008099Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.762035595Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.762107167Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.762365787Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.762772857Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.762959611Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.762986041Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.763014117Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.763035609Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.763056699Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.763094459Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:34:16.763409 containerd[1509]: time="2025-01-29T16:34:16.763120833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763144652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763172630Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763194083Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763218906Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763251763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763276361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763298022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763319683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763340552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.763361323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.764045077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.764096217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764126 containerd[1509]: time="2025-01-29T16:34:16.764124191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764149594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764179312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764215380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764240944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764266963Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764307377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764330683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764349671Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764453948Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764487843Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764589440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764610868Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:34:16.764689 containerd[1509]: time="2025-01-29T16:34:16.764628334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:34:16.765229 containerd[1509]: time="2025-01-29T16:34:16.764654932Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:34:16.765229 containerd[1509]: time="2025-01-29T16:34:16.764673276Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:34:16.765229 containerd[1509]: time="2025-01-29T16:34:16.764691306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:34:16.767477 containerd[1509]: time="2025-01-29T16:34:16.765203404Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:34:16.767477 containerd[1509]: time="2025-01-29T16:34:16.765283506Z" level=info msg="Connect containerd service" Jan 29 16:34:16.767477 containerd[1509]: time="2025-01-29T16:34:16.765341221Z" level=info msg="using legacy CRI server" Jan 29 16:34:16.767477 containerd[1509]: time="2025-01-29T16:34:16.765356341Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:34:16.767477 containerd[1509]: time="2025-01-29T16:34:16.765637124Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:34:16.768370 containerd[1509]: time="2025-01-29T16:34:16.768262023Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:34:16.768747 containerd[1509]: time="2025-01-29T16:34:16.768691698Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:34:16.768857 containerd[1509]: time="2025-01-29T16:34:16.768766752Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 29 16:34:16.768925 containerd[1509]: time="2025-01-29T16:34:16.768840647Z" level=info msg="Start subscribing containerd event" Jan 29 16:34:16.768925 containerd[1509]: time="2025-01-29T16:34:16.768894299Z" level=info msg="Start recovering state" Jan 29 16:34:16.769013 containerd[1509]: time="2025-01-29T16:34:16.768987460Z" level=info msg="Start event monitor" Jan 29 16:34:16.769059 containerd[1509]: time="2025-01-29T16:34:16.769011755Z" level=info msg="Start snapshots syncer" Jan 29 16:34:16.769059 containerd[1509]: time="2025-01-29T16:34:16.769027215Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:34:16.769059 containerd[1509]: time="2025-01-29T16:34:16.769039908Z" level=info msg="Start streaming server" Jan 29 16:34:16.769262 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:34:16.771520 containerd[1509]: time="2025-01-29T16:34:16.771490561Z" level=info msg="containerd successfully booted in 0.107803s" Jan 29 16:34:16.947583 sshd[1579]: Accepted publickey for core from 147.75.109.163 port 51632 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:16.954316 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:16.970053 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:34:16.985895 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:34:17.012246 systemd-logind[1497]: New session 1 of user core. Jan 29 16:34:17.036938 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:34:17.063338 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:34:17.097236 (systemd)[1603]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:34:17.102063 systemd-logind[1497]: New session c1 of user core. 
Jan 29 16:34:17.349602 instance-setup[1568]: INFO Running google_set_multiqueue. Jan 29 16:34:17.382963 instance-setup[1568]: INFO Set channels for eth0 to 2. Jan 29 16:34:17.389366 instance-setup[1568]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 29 16:34:17.391902 instance-setup[1568]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 29 16:34:17.392715 instance-setup[1568]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 29 16:34:17.394991 instance-setup[1568]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 29 16:34:17.395949 instance-setup[1568]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 29 16:34:17.397903 instance-setup[1568]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 29 16:34:17.398558 instance-setup[1568]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 29 16:34:17.400788 instance-setup[1568]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 29 16:34:17.411902 instance-setup[1568]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 29 16:34:17.418301 instance-setup[1568]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 29 16:34:17.420764 instance-setup[1568]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 29 16:34:17.420816 instance-setup[1568]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 29 16:34:17.436625 systemd[1603]: Queued start job for default target default.target. Jan 29 16:34:17.446356 systemd[1603]: Created slice app.slice - User Application Slice. Jan 29 16:34:17.447814 systemd[1603]: Reached target paths.target - Paths. Jan 29 16:34:17.447942 systemd[1603]: Reached target timers.target - Timers. 
Jan 29 16:34:17.458415 init.sh[1563]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 29 16:34:17.460609 systemd[1603]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:34:17.476238 systemd[1603]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:34:17.476659 systemd[1603]: Reached target sockets.target - Sockets. Jan 29 16:34:17.477009 systemd[1603]: Reached target basic.target - Basic System. Jan 29 16:34:17.477269 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:34:17.477611 systemd[1603]: Reached target default.target - Main User Target. Jan 29 16:34:17.477671 systemd[1603]: Startup finished in 357ms. Jan 29 16:34:17.496734 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:34:17.641697 startup-script[1640]: INFO Starting startup scripts. Jan 29 16:34:17.650088 startup-script[1640]: INFO No startup scripts found in metadata. Jan 29 16:34:17.650186 startup-script[1640]: INFO Finished running startup scripts. Jan 29 16:34:17.706661 init.sh[1563]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 29 16:34:17.706800 init.sh[1563]: + daemon_pids=() Jan 29 16:34:17.706800 init.sh[1563]: + for d in accounts clock_skew network Jan 29 16:34:17.707063 init.sh[1563]: + daemon_pids+=($!) Jan 29 16:34:17.707149 init.sh[1563]: + for d in accounts clock_skew network Jan 29 16:34:17.707698 init.sh[1646]: + /usr/bin/google_accounts_daemon Jan 29 16:34:17.708123 init.sh[1647]: + /usr/bin/google_clock_skew_daemon Jan 29 16:34:17.708422 init.sh[1563]: + daemon_pids+=($!) Jan 29 16:34:17.708422 init.sh[1563]: + for d in accounts clock_skew network Jan 29 16:34:17.712269 init.sh[1563]: + daemon_pids+=($!) 
Jan 29 16:34:17.712269 init.sh[1563]: + NOTIFY_SOCKET=/run/systemd/notify Jan 29 16:34:17.712269 init.sh[1563]: + /usr/bin/systemd-notify --ready Jan 29 16:34:17.712509 init.sh[1648]: + /usr/bin/google_network_daemon Jan 29 16:34:17.745775 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 29 16:34:17.765823 init.sh[1563]: + wait -n 1646 1647 1648 Jan 29 16:34:17.766894 systemd[1]: Started sshd@1-10.128.0.87:22-147.75.109.163:48366.service - OpenSSH per-connection server daemon (147.75.109.163:48366). Jan 29 16:34:18.123854 google-networking[1648]: INFO Starting Google Networking daemon. Jan 29 16:34:18.142724 sshd[1651]: Accepted publickey for core from 147.75.109.163 port 48366 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:18.144429 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:18.164059 systemd-logind[1497]: New session 2 of user core. Jan 29 16:34:18.168655 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:34:18.212612 google-clock-skew[1647]: INFO Starting Google Clock Skew daemon. Jan 29 16:34:18.219087 google-clock-skew[1647]: INFO Clock drift token has changed: 0. Jan 29 16:34:18.249550 groupadd[1661]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 29 16:34:18.255562 groupadd[1661]: group added to /etc/gshadow: name=google-sudoers Jan 29 16:34:18.320482 groupadd[1661]: new group: name=google-sudoers, GID=1000 Jan 29 16:34:18.355608 google-accounts[1646]: INFO Starting Google Accounts daemon. Jan 29 16:34:18.000453 systemd-resolved[1337]: Clock change detected. Flushing caches. Jan 29 16:34:18.028717 systemd-journald[1118]: Time jumped backwards, rotating. Jan 29 16:34:18.029923 sshd[1660]: Connection closed by 147.75.109.163 port 48366 Jan 29 16:34:18.020779 systemd[1]: sshd@1-10.128.0.87:22-147.75.109.163:48366.service: Deactivated successfully. 
Jan 29 16:34:18.001887 google-clock-skew[1647]: INFO Synced system time with hardware clock. Jan 29 16:34:18.024708 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:34:18.014086 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Jan 29 16:34:18.027857 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:34:18.018527 google-accounts[1646]: WARNING OS Login not installed. Jan 29 16:34:18.023334 google-accounts[1646]: INFO Creating a new user account for 0. Jan 29 16:34:18.034291 systemd-logind[1497]: Removed session 2. Jan 29 16:34:18.037836 init.sh[1672]: useradd: invalid user name '0': use --badname to ignore Jan 29 16:34:18.038627 google-accounts[1646]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 29 16:34:18.075748 systemd[1]: Started sshd@2-10.128.0.87:22-147.75.109.163:48370.service - OpenSSH per-connection server daemon (147.75.109.163:48370). Jan 29 16:34:18.151093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:34:18.163219 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:34:18.169686 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:34:18.174263 systemd[1]: Startup finished in 1.068s (kernel) + 16.300s (initrd) + 9.534s (userspace) = 26.904s. 
Jan 29 16:34:18.332519 ntpd[1479]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:57%2]:123 Jan 29 16:34:18.333159 ntpd[1479]: 29 Jan 16:34:18 ntpd[1479]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:57%2]:123 Jan 29 16:34:18.405012 sshd[1678]: Accepted publickey for core from 147.75.109.163 port 48370 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:18.406441 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:18.415953 systemd-logind[1497]: New session 3 of user core. Jan 29 16:34:18.421087 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:34:18.621968 sshd[1694]: Connection closed by 147.75.109.163 port 48370 Jan 29 16:34:18.622968 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Jan 29 16:34:18.629141 systemd[1]: sshd@2-10.128.0.87:22-147.75.109.163:48370.service: Deactivated successfully. Jan 29 16:34:18.632604 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:34:18.633878 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:34:18.635361 systemd-logind[1497]: Removed session 3. Jan 29 16:34:19.080806 kubelet[1685]: E0129 16:34:19.080309 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:34:19.083257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:34:19.083518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:34:19.084066 systemd[1]: kubelet.service: Consumed 1.213s CPU time, 236.8M memory peak. 
Jan 29 16:34:28.684271 systemd[1]: Started sshd@3-10.128.0.87:22-147.75.109.163:50754.service - OpenSSH per-connection server daemon (147.75.109.163:50754). Jan 29 16:34:28.973595 sshd[1702]: Accepted publickey for core from 147.75.109.163 port 50754 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:28.975480 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:28.983005 systemd-logind[1497]: New session 4 of user core. Jan 29 16:34:28.989081 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:34:29.145591 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:34:29.151124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:34:29.186869 sshd[1704]: Connection closed by 147.75.109.163 port 50754 Jan 29 16:34:29.187625 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Jan 29 16:34:29.192164 systemd[1]: sshd@3-10.128.0.87:22-147.75.109.163:50754.service: Deactivated successfully. Jan 29 16:34:29.194556 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:34:29.196716 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:34:29.198335 systemd-logind[1497]: Removed session 4. Jan 29 16:34:29.248265 systemd[1]: Started sshd@4-10.128.0.87:22-147.75.109.163:50768.service - OpenSSH per-connection server daemon (147.75.109.163:50768). Jan 29 16:34:29.445259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:34:29.456450 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:34:29.510241 kubelet[1720]: E0129 16:34:29.510085 1720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:34:29.514816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:34:29.515108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:34:29.515670 systemd[1]: kubelet.service: Consumed 184ms CPU time, 98.2M memory peak. Jan 29 16:34:29.553050 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 50768 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:29.554741 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:29.560869 systemd-logind[1497]: New session 5 of user core. Jan 29 16:34:29.572202 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 16:34:29.765241 sshd[1728]: Connection closed by 147.75.109.163 port 50768 Jan 29 16:34:29.766472 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Jan 29 16:34:29.772124 systemd[1]: sshd@4-10.128.0.87:22-147.75.109.163:50768.service: Deactivated successfully. Jan 29 16:34:29.774464 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:34:29.775613 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:34:29.777707 systemd-logind[1497]: Removed session 5. Jan 29 16:34:29.824257 systemd[1]: Started sshd@5-10.128.0.87:22-147.75.109.163:50782.service - OpenSSH per-connection server daemon (147.75.109.163:50782). 
Jan 29 16:34:30.112411 sshd[1734]: Accepted publickey for core from 147.75.109.163 port 50782 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:30.114242 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:30.120239 systemd-logind[1497]: New session 6 of user core. Jan 29 16:34:30.129109 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:34:30.323870 sshd[1736]: Connection closed by 147.75.109.163 port 50782 Jan 29 16:34:30.324734 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Jan 29 16:34:30.330190 systemd[1]: sshd@5-10.128.0.87:22-147.75.109.163:50782.service: Deactivated successfully. Jan 29 16:34:30.332618 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:34:30.333890 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:34:30.335354 systemd-logind[1497]: Removed session 6. Jan 29 16:34:30.385249 systemd[1]: Started sshd@6-10.128.0.87:22-147.75.109.163:50794.service - OpenSSH per-connection server daemon (147.75.109.163:50794). Jan 29 16:34:30.677662 sshd[1742]: Accepted publickey for core from 147.75.109.163 port 50794 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:30.679413 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:30.685339 systemd-logind[1497]: New session 7 of user core. Jan 29 16:34:30.696216 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 29 16:34:30.874066 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:34:30.874602 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:34:30.895025 sudo[1745]: pam_unix(sudo:session): session closed for user root Jan 29 16:34:30.938039 sshd[1744]: Connection closed by 147.75.109.163 port 50794 Jan 29 16:34:30.939501 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jan 29 16:34:30.944918 systemd[1]: sshd@6-10.128.0.87:22-147.75.109.163:50794.service: Deactivated successfully. Jan 29 16:34:30.947410 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:34:30.949603 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:34:30.951596 systemd-logind[1497]: Removed session 7. Jan 29 16:34:30.994292 systemd[1]: Started sshd@7-10.128.0.87:22-147.75.109.163:50806.service - OpenSSH per-connection server daemon (147.75.109.163:50806). Jan 29 16:34:31.286004 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 50806 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:31.287559 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:31.293953 systemd-logind[1497]: New session 8 of user core. Jan 29 16:34:31.302218 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 16:34:31.465222 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:34:31.465735 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:34:31.471470 sudo[1755]: pam_unix(sudo:session): session closed for user root Jan 29 16:34:31.486388 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:34:31.486915 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:34:31.511814 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:34:31.551797 augenrules[1777]: No rules Jan 29 16:34:31.552308 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:34:31.552640 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:34:31.554381 sudo[1754]: pam_unix(sudo:session): session closed for user root Jan 29 16:34:31.596942 sshd[1753]: Connection closed by 147.75.109.163 port 50806 Jan 29 16:34:31.597760 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Jan 29 16:34:31.603312 systemd[1]: sshd@7-10.128.0.87:22-147.75.109.163:50806.service: Deactivated successfully. Jan 29 16:34:31.605679 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:34:31.606746 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:34:31.608303 systemd-logind[1497]: Removed session 8. Jan 29 16:34:31.654597 systemd[1]: Started sshd@8-10.128.0.87:22-147.75.109.163:50816.service - OpenSSH per-connection server daemon (147.75.109.163:50816). 
Jan 29 16:34:31.956551 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 50816 ssh2: RSA SHA256:eAiu0eQJm+e+QIlxuxvnIeYtdbt4irdBjVTy2hTYMiQ Jan 29 16:34:31.958128 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:34:31.964134 systemd-logind[1497]: New session 9 of user core. Jan 29 16:34:31.975149 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:34:32.135728 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:34:32.136273 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:34:32.994800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:34:32.995410 systemd[1]: kubelet.service: Consumed 184ms CPU time, 98.2M memory peak. Jan 29 16:34:33.010369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:34:33.053730 systemd[1]: Reload requested from client PID 1822 ('systemctl') (unit session-9.scope)... Jan 29 16:34:33.053766 systemd[1]: Reloading... Jan 29 16:34:33.245908 zram_generator::config[1870]: No configuration found. Jan 29 16:34:33.389889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:34:33.534111 systemd[1]: Reloading finished in 479 ms. Jan 29 16:34:33.597914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:34:33.613529 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:34:33.616991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:34:33.617694 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:34:33.618074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:34:33.618153 systemd[1]: kubelet.service: Consumed 128ms CPU time, 84.5M memory peak. Jan 29 16:34:33.627321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:34:33.852162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:34:33.853758 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:34:33.916263 kubelet[1921]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:34:33.917004 kubelet[1921]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:34:33.917004 kubelet[1921]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 16:34:33.919564 kubelet[1921]: I0129 16:34:33.919173 1921 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:34:34.780274 kubelet[1921]: I0129 16:34:34.780214 1921 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 16:34:34.780274 kubelet[1921]: I0129 16:34:34.780252 1921 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:34:34.780633 kubelet[1921]: I0129 16:34:34.780596 1921 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 16:34:34.819316 kubelet[1921]: I0129 16:34:34.818313 1921 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:34:34.842632 kubelet[1921]: E0129 16:34:34.842563 1921 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 16:34:34.842632 kubelet[1921]: I0129 16:34:34.842619 1921 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 16:34:34.848584 kubelet[1921]: I0129 16:34:34.848550 1921 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:34:34.848734 kubelet[1921]: I0129 16:34:34.848708 1921 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 16:34:34.849071 kubelet[1921]: I0129 16:34:34.849013 1921 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:34:34.849318 kubelet[1921]: I0129 16:34:34.849059 1921 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.128.0.87","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Jan 29 16:34:34.849516 kubelet[1921]: I0129 16:34:34.849329 1921 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:34:34.849516 kubelet[1921]: I0129 16:34:34.849347 1921 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 16:34:34.849516 kubelet[1921]: I0129 16:34:34.849490 1921 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:34:34.852416 kubelet[1921]: I0129 16:34:34.852014 1921 kubelet.go:408] "Attempting to sync node with API server" Jan 29 16:34:34.852416 kubelet[1921]: I0129 16:34:34.852053 1921 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:34:34.852416 kubelet[1921]: I0129 16:34:34.852101 1921 kubelet.go:314] "Adding apiserver pod source" Jan 29 16:34:34.852416 kubelet[1921]: I0129 16:34:34.852122 1921 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:34:34.852703 kubelet[1921]: E0129 16:34:34.852634 1921 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:34.852703 kubelet[1921]: E0129 16:34:34.852690 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:34.859601 kubelet[1921]: I0129 16:34:34.859433 1921 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:34:34.861978 kubelet[1921]: I0129 16:34:34.861936 1921 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:34:34.865020 kubelet[1921]: W0129 16:34:34.863699 1921 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 16:34:34.865020 kubelet[1921]: I0129 16:34:34.864539 1921 server.go:1269] "Started kubelet" Jan 29 16:34:34.868170 kubelet[1921]: I0129 16:34:34.867650 1921 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:34:34.869812 kubelet[1921]: I0129 16:34:34.868925 1921 server.go:460] "Adding debug handlers to kubelet server" Jan 29 16:34:34.872675 kubelet[1921]: I0129 16:34:34.872623 1921 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:34:34.874664 kubelet[1921]: I0129 16:34:34.874423 1921 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:34:34.874772 kubelet[1921]: I0129 16:34:34.874743 1921 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:34:34.876348 kubelet[1921]: I0129 16:34:34.876022 1921 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:34:34.880475 kubelet[1921]: E0129 16:34:34.880161 1921 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.128.0.87\" not found" Jan 29 16:34:34.880655 kubelet[1921]: I0129 16:34:34.880611 1921 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:34:34.880898 kubelet[1921]: I0129 16:34:34.880877 1921 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:34:34.880981 kubelet[1921]: I0129 16:34:34.880971 1921 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:34:34.881518 kubelet[1921]: E0129 16:34:34.881372 1921 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:34:34.882103 kubelet[1921]: I0129 16:34:34.881895 1921 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:34:34.884207 kubelet[1921]: I0129 16:34:34.884182 1921 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:34:34.884556 kubelet[1921]: I0129 16:34:34.884452 1921 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:34:34.893161 kubelet[1921]: E0129 16:34:34.892170 1921 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.87\" not found" node="10.128.0.87" Jan 29 16:34:34.906291 kubelet[1921]: I0129 16:34:34.906263 1921 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:34:34.906696 kubelet[1921]: I0129 16:34:34.906677 1921 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:34:34.906858 kubelet[1921]: I0129 16:34:34.906818 1921 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:34:34.910096 kubelet[1921]: I0129 16:34:34.910068 1921 policy_none.go:49] "None policy: Start" Jan 29 16:34:34.911815 kubelet[1921]: I0129 16:34:34.911761 1921 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:34:34.912337 kubelet[1921]: I0129 16:34:34.912106 1921 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:34:34.939730 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:34:34.966424 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:34:34.974289 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 16:34:34.981747 kubelet[1921]: E0129 16:34:34.981637 1921 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.128.0.87\" not found" Jan 29 16:34:34.984413 kubelet[1921]: I0129 16:34:34.984366 1921 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:34:34.987737 kubelet[1921]: I0129 16:34:34.984621 1921 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:34:34.987737 kubelet[1921]: I0129 16:34:34.984643 1921 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:34:34.987737 kubelet[1921]: I0129 16:34:34.984996 1921 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:34:34.991408 kubelet[1921]: E0129 16:34:34.991384 1921 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.87\" not found" Jan 29 16:34:35.001130 kubelet[1921]: I0129 16:34:35.001066 1921 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:34:35.004308 kubelet[1921]: I0129 16:34:35.004272 1921 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 16:34:35.004542 kubelet[1921]: I0129 16:34:35.004484 1921 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:34:35.004542 kubelet[1921]: I0129 16:34:35.004518 1921 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:34:35.005440 kubelet[1921]: E0129 16:34:35.004925 1921 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 16:34:35.086953 kubelet[1921]: I0129 16:34:35.086799 1921 kubelet_node_status.go:72] "Attempting to register node" node="10.128.0.87" Jan 29 16:34:35.095677 kubelet[1921]: I0129 16:34:35.095622 1921 kubelet_node_status.go:75] "Successfully registered node" node="10.128.0.87" Jan 29 16:34:35.208969 kubelet[1921]: I0129 16:34:35.208884 1921 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 16:34:35.209525 containerd[1509]: time="2025-01-29T16:34:35.209330520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:34:35.210674 kubelet[1921]: I0129 16:34:35.210389 1921 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 16:34:35.396776 sudo[1789]: pam_unix(sudo:session): session closed for user root Jan 29 16:34:35.439376 sshd[1788]: Connection closed by 147.75.109.163 port 50816 Jan 29 16:34:35.440405 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Jan 29 16:34:35.445451 systemd[1]: sshd@8-10.128.0.87:22-147.75.109.163:50816.service: Deactivated successfully. Jan 29 16:34:35.448284 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:34:35.448891 systemd[1]: session-9.scope: Consumed 606ms CPU time, 77.3M memory peak. Jan 29 16:34:35.452796 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:34:35.454945 systemd-logind[1497]: Removed session 9. 
Jan 29 16:34:35.782664 kubelet[1921]: I0129 16:34:35.782484 1921 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 16:34:35.782853 kubelet[1921]: W0129 16:34:35.782760 1921 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:34:35.783240 kubelet[1921]: W0129 16:34:35.783190 1921 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:34:35.783240 kubelet[1921]: W0129 16:34:35.783245 1921 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 16:34:35.853616 kubelet[1921]: I0129 16:34:35.853528 1921 apiserver.go:52] "Watching apiserver" Jan 29 16:34:35.853616 kubelet[1921]: E0129 16:34:35.853523 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:35.860808 kubelet[1921]: E0129 16:34:35.860248 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:35.877914 systemd[1]: Created slice kubepods-besteffort-pod08efb8a0_3b9c_4354_9b81_5e5792f347a0.slice - libcontainer container kubepods-besteffort-pod08efb8a0_3b9c_4354_9b81_5e5792f347a0.slice. 
Jan 29 16:34:35.882861 kubelet[1921]: I0129 16:34:35.881956 1921 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 16:34:35.889764 kubelet[1921]: I0129 16:34:35.889723 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-lib-modules\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.889973 kubelet[1921]: I0129 16:34:35.889946 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-xtables-lock\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890035 kubelet[1921]: I0129 16:34:35.889993 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h4h7\" (UniqueName: \"kubernetes.io/projected/08efb8a0-3b9c-4354-9b81-5e5792f347a0-kube-api-access-8h4h7\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890095 kubelet[1921]: I0129 16:34:35.890033 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ca0d836-86b3-4c6c-9e65-2af0fbd013c3-xtables-lock\") pod \"kube-proxy-jn586\" (UID: \"6ca0d836-86b3-4c6c-9e65-2af0fbd013c3\") " pod="kube-system/kube-proxy-jn586" Jan 29 16:34:35.890095 kubelet[1921]: I0129 16:34:35.890071 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/08efb8a0-3b9c-4354-9b81-5e5792f347a0-node-certs\") pod \"calico-node-mj245\" (UID: 
\"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890188 kubelet[1921]: I0129 16:34:35.890100 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-cni-bin-dir\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890188 kubelet[1921]: I0129 16:34:35.890135 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-cni-log-dir\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890188 kubelet[1921]: I0129 16:34:35.890171 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec43a3f5-5f5f-4f82-a768-b19afc7730bd-kubelet-dir\") pod \"csi-node-driver-xz4l2\" (UID: \"ec43a3f5-5f5f-4f82-a768-b19afc7730bd\") " pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:35.890332 kubelet[1921]: I0129 16:34:35.890207 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ec43a3f5-5f5f-4f82-a768-b19afc7730bd-socket-dir\") pod \"csi-node-driver-xz4l2\" (UID: \"ec43a3f5-5f5f-4f82-a768-b19afc7730bd\") " pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:35.890332 kubelet[1921]: I0129 16:34:35.890242 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ec43a3f5-5f5f-4f82-a768-b19afc7730bd-registration-dir\") pod \"csi-node-driver-xz4l2\" (UID: \"ec43a3f5-5f5f-4f82-a768-b19afc7730bd\") " 
pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:35.890332 kubelet[1921]: I0129 16:34:35.890271 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ca0d836-86b3-4c6c-9e65-2af0fbd013c3-lib-modules\") pod \"kube-proxy-jn586\" (UID: \"6ca0d836-86b3-4c6c-9e65-2af0fbd013c3\") " pod="kube-system/kube-proxy-jn586" Jan 29 16:34:35.890332 kubelet[1921]: I0129 16:34:35.890306 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-policysync\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890512 kubelet[1921]: I0129 16:34:35.890343 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-var-run-calico\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890512 kubelet[1921]: I0129 16:34:35.890378 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-var-lib-calico\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890512 kubelet[1921]: I0129 16:34:35.890407 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-cni-net-dir\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890512 kubelet[1921]: I0129 
16:34:35.890453 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ec43a3f5-5f5f-4f82-a768-b19afc7730bd-varrun\") pod \"csi-node-driver-xz4l2\" (UID: \"ec43a3f5-5f5f-4f82-a768-b19afc7730bd\") " pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:35.890512 kubelet[1921]: I0129 16:34:35.890486 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljnv5\" (UniqueName: \"kubernetes.io/projected/ec43a3f5-5f5f-4f82-a768-b19afc7730bd-kube-api-access-ljnv5\") pod \"csi-node-driver-xz4l2\" (UID: \"ec43a3f5-5f5f-4f82-a768-b19afc7730bd\") " pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:35.890736 kubelet[1921]: I0129 16:34:35.890524 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltxst\" (UniqueName: \"kubernetes.io/projected/6ca0d836-86b3-4c6c-9e65-2af0fbd013c3-kube-api-access-ltxst\") pod \"kube-proxy-jn586\" (UID: \"6ca0d836-86b3-4c6c-9e65-2af0fbd013c3\") " pod="kube-system/kube-proxy-jn586" Jan 29 16:34:35.890736 kubelet[1921]: I0129 16:34:35.890561 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08efb8a0-3b9c-4354-9b81-5e5792f347a0-tigera-ca-bundle\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890736 kubelet[1921]: I0129 16:34:35.890593 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/08efb8a0-3b9c-4354-9b81-5e5792f347a0-flexvol-driver-host\") pod \"calico-node-mj245\" (UID: \"08efb8a0-3b9c-4354-9b81-5e5792f347a0\") " pod="calico-system/calico-node-mj245" Jan 29 16:34:35.890736 kubelet[1921]: I0129 16:34:35.890626 1921 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ca0d836-86b3-4c6c-9e65-2af0fbd013c3-kube-proxy\") pod \"kube-proxy-jn586\" (UID: \"6ca0d836-86b3-4c6c-9e65-2af0fbd013c3\") " pod="kube-system/kube-proxy-jn586" Jan 29 16:34:35.896148 systemd[1]: Created slice kubepods-besteffort-pod6ca0d836_86b3_4c6c_9e65_2af0fbd013c3.slice - libcontainer container kubepods-besteffort-pod6ca0d836_86b3_4c6c_9e65_2af0fbd013c3.slice. Jan 29 16:34:35.994945 kubelet[1921]: E0129 16:34:35.994889 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:35.995577 kubelet[1921]: W0129 16:34:35.995018 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:35.995577 kubelet[1921]: E0129 16:34:35.995061 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:35.996751 kubelet[1921]: E0129 16:34:35.996214 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:35.996751 kubelet[1921]: W0129 16:34:35.996281 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:35.996751 kubelet[1921]: E0129 16:34:35.996323 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:35.997629 kubelet[1921]: E0129 16:34:35.997376 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:35.997629 kubelet[1921]: W0129 16:34:35.997393 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:35.997930 kubelet[1921]: E0129 16:34:35.997773 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:35.998443 kubelet[1921]: E0129 16:34:35.998335 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:35.998443 kubelet[1921]: W0129 16:34:35.998375 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:35.998694 kubelet[1921]: E0129 16:34:35.998536 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:35.999278 kubelet[1921]: E0129 16:34:35.999165 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:35.999278 kubelet[1921]: W0129 16:34:35.999203 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:35.999278 kubelet[1921]: E0129 16:34:35.999227 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:36.000166 kubelet[1921]: E0129 16:34:35.999924 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:36.000166 kubelet[1921]: W0129 16:34:35.999964 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:36.000526 kubelet[1921]: E0129 16:34:36.000507 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:36.000679 kubelet[1921]: W0129 16:34:36.000605 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:36.000679 kubelet[1921]: E0129 16:34:36.000630 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:36.000984 kubelet[1921]: E0129 16:34:36.000006 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:36.001417 kubelet[1921]: E0129 16:34:36.001357 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:36.001417 kubelet[1921]: W0129 16:34:36.001375 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:36.002133 kubelet[1921]: E0129 16:34:36.002058 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:36.002592 kubelet[1921]: E0129 16:34:36.002510 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:36.002592 kubelet[1921]: W0129 16:34:36.002536 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:36.002592 kubelet[1921]: E0129 16:34:36.002553 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:36.025132 kubelet[1921]: E0129 16:34:36.023083 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:36.025132 kubelet[1921]: W0129 16:34:36.023110 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:36.025132 kubelet[1921]: E0129 16:34:36.023140 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:36.039620 kubelet[1921]: E0129 16:34:36.037110 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:36.039620 kubelet[1921]: W0129 16:34:36.037138 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:36.039620 kubelet[1921]: E0129 16:34:36.037169 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:36.042217 kubelet[1921]: E0129 16:34:36.042067 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:36.042217 kubelet[1921]: W0129 16:34:36.042113 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:36.042217 kubelet[1921]: E0129 16:34:36.042144 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:36.190814 containerd[1509]: time="2025-01-29T16:34:36.190725871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mj245,Uid:08efb8a0-3b9c-4354-9b81-5e5792f347a0,Namespace:calico-system,Attempt:0,}" Jan 29 16:34:36.199741 containerd[1509]: time="2025-01-29T16:34:36.199676661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jn586,Uid:6ca0d836-86b3-4c6c-9e65-2af0fbd013c3,Namespace:kube-system,Attempt:0,}" Jan 29 16:34:36.701489 containerd[1509]: time="2025-01-29T16:34:36.701392068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:34:36.704515 containerd[1509]: time="2025-01-29T16:34:36.704433615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 29 16:34:36.705665 containerd[1509]: time="2025-01-29T16:34:36.705601569Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:34:36.707374 containerd[1509]: time="2025-01-29T16:34:36.707307899Z" level=info 
msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:34:36.708412 containerd[1509]: time="2025-01-29T16:34:36.708332126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:34:36.711947 containerd[1509]: time="2025-01-29T16:34:36.711902700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:34:36.713816 containerd[1509]: time="2025-01-29T16:34:36.713151117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.220438ms" Jan 29 16:34:36.714662 containerd[1509]: time="2025-01-29T16:34:36.714613684Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.799456ms" Jan 29 16:34:36.854079 kubelet[1921]: E0129 16:34:36.854003 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:36.883807 containerd[1509]: time="2025-01-29T16:34:36.883528771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:34:36.883807 containerd[1509]: time="2025-01-29T16:34:36.883693496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:34:36.884318 containerd[1509]: time="2025-01-29T16:34:36.883767373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:34:36.886856 containerd[1509]: time="2025-01-29T16:34:36.886432187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:34:36.887552 containerd[1509]: time="2025-01-29T16:34:36.887453393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:34:36.887672 containerd[1509]: time="2025-01-29T16:34:36.887590488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:34:36.887729 containerd[1509]: time="2025-01-29T16:34:36.887666320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:34:36.888262 containerd[1509]: time="2025-01-29T16:34:36.887865921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:34:37.007609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3391871853.mount: Deactivated successfully. Jan 29 16:34:37.019091 systemd[1]: Started cri-containerd-25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24.scope - libcontainer container 25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24. 
Jan 29 16:34:37.022170 systemd[1]: Started cri-containerd-ffe2940d5174bb72113fc233b3aef076c9fa4fb8e3200b4aef7b96c675c8febb.scope - libcontainer container ffe2940d5174bb72113fc233b3aef076c9fa4fb8e3200b4aef7b96c675c8febb. Jan 29 16:34:37.073492 containerd[1509]: time="2025-01-29T16:34:37.073299788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jn586,Uid:6ca0d836-86b3-4c6c-9e65-2af0fbd013c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffe2940d5174bb72113fc233b3aef076c9fa4fb8e3200b4aef7b96c675c8febb\"" Jan 29 16:34:37.076407 containerd[1509]: time="2025-01-29T16:34:37.076057916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mj245,Uid:08efb8a0-3b9c-4354-9b81-5e5792f347a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24\"" Jan 29 16:34:37.080615 containerd[1509]: time="2025-01-29T16:34:37.080481075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 16:34:37.855252 kubelet[1921]: E0129 16:34:37.855206 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:38.005702 kubelet[1921]: E0129 16:34:38.005478 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:38.341190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995102388.mount: Deactivated successfully. 
Jan 29 16:34:38.855475 kubelet[1921]: E0129 16:34:38.855402 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:38.989205 containerd[1509]: time="2025-01-29T16:34:38.989129899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:38.990602 containerd[1509]: time="2025-01-29T16:34:38.990512535Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30233023" Jan 29 16:34:38.992347 containerd[1509]: time="2025-01-29T16:34:38.992269912Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:38.995911 containerd[1509]: time="2025-01-29T16:34:38.995809391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:38.997526 containerd[1509]: time="2025-01-29T16:34:38.996889282Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.916345224s" Jan 29 16:34:38.997526 containerd[1509]: time="2025-01-29T16:34:38.996939688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 16:34:39.000479 containerd[1509]: time="2025-01-29T16:34:39.000447923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 16:34:39.001805 
containerd[1509]: time="2025-01-29T16:34:39.001750192Z" level=info msg="CreateContainer within sandbox \"ffe2940d5174bb72113fc233b3aef076c9fa4fb8e3200b4aef7b96c675c8febb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:34:39.028535 containerd[1509]: time="2025-01-29T16:34:39.028425495Z" level=info msg="CreateContainer within sandbox \"ffe2940d5174bb72113fc233b3aef076c9fa4fb8e3200b4aef7b96c675c8febb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d5ad069602b96da3aae34ec7074a3f3c09aa09aeda23b5240ebd785500f03f6\"" Jan 29 16:34:39.030864 containerd[1509]: time="2025-01-29T16:34:39.029402464Z" level=info msg="StartContainer for \"6d5ad069602b96da3aae34ec7074a3f3c09aa09aeda23b5240ebd785500f03f6\"" Jan 29 16:34:39.086119 systemd[1]: Started cri-containerd-6d5ad069602b96da3aae34ec7074a3f3c09aa09aeda23b5240ebd785500f03f6.scope - libcontainer container 6d5ad069602b96da3aae34ec7074a3f3c09aa09aeda23b5240ebd785500f03f6. Jan 29 16:34:39.135046 containerd[1509]: time="2025-01-29T16:34:39.134351168Z" level=info msg="StartContainer for \"6d5ad069602b96da3aae34ec7074a3f3c09aa09aeda23b5240ebd785500f03f6\" returns successfully" Jan 29 16:34:39.855798 kubelet[1921]: E0129 16:34:39.855725 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:39.912634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3186747590.mount: Deactivated successfully. 
Jan 29 16:34:40.005536 kubelet[1921]: E0129 16:34:40.005464 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:40.050665 containerd[1509]: time="2025-01-29T16:34:40.050596297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:40.052048 containerd[1509]: time="2025-01-29T16:34:40.051969457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 16:34:40.053589 containerd[1509]: time="2025-01-29T16:34:40.053493120Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:40.058611 containerd[1509]: time="2025-01-29T16:34:40.057099665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:40.058611 containerd[1509]: time="2025-01-29T16:34:40.058419717Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.057843001s" Jan 29 16:34:40.058611 containerd[1509]: time="2025-01-29T16:34:40.058465507Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 16:34:40.061977 containerd[1509]: time="2025-01-29T16:34:40.061915162Z" level=info msg="CreateContainer within sandbox \"25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 16:34:40.092064 containerd[1509]: time="2025-01-29T16:34:40.092003152Z" level=info msg="CreateContainer within sandbox \"25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e\"" Jan 29 16:34:40.092966 containerd[1509]: time="2025-01-29T16:34:40.092929303Z" level=info msg="StartContainer for \"a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e\"" Jan 29 16:34:40.108176 kubelet[1921]: E0129 16:34:40.108034 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.108176 kubelet[1921]: W0129 16:34:40.108068 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.108176 kubelet[1921]: E0129 16:34:40.108096 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.109710 kubelet[1921]: E0129 16:34:40.108944 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.109710 kubelet[1921]: W0129 16:34:40.108966 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.109710 kubelet[1921]: E0129 16:34:40.108987 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.111384 kubelet[1921]: E0129 16:34:40.111123 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.111384 kubelet[1921]: W0129 16:34:40.111142 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.111384 kubelet[1921]: E0129 16:34:40.111163 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.115914 kubelet[1921]: E0129 16:34:40.115887 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.115914 kubelet[1921]: W0129 16:34:40.115913 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.116081 kubelet[1921]: E0129 16:34:40.115935 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.117868 kubelet[1921]: E0129 16:34:40.116273 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.117868 kubelet[1921]: W0129 16:34:40.116291 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.117868 kubelet[1921]: E0129 16:34:40.116309 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.117868 kubelet[1921]: E0129 16:34:40.116593 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.117868 kubelet[1921]: W0129 16:34:40.116608 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.117868 kubelet[1921]: E0129 16:34:40.116632 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.117868 kubelet[1921]: E0129 16:34:40.116968 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.117868 kubelet[1921]: W0129 16:34:40.116981 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.117868 kubelet[1921]: E0129 16:34:40.116995 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.117868 kubelet[1921]: E0129 16:34:40.117324 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.120183 kubelet[1921]: W0129 16:34:40.117337 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.120183 kubelet[1921]: E0129 16:34:40.117350 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.120183 kubelet[1921]: E0129 16:34:40.117674 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.120183 kubelet[1921]: W0129 16:34:40.117687 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.120183 kubelet[1921]: E0129 16:34:40.117701 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.120183 kubelet[1921]: E0129 16:34:40.118432 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.120183 kubelet[1921]: W0129 16:34:40.118448 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.120183 kubelet[1921]: E0129 16:34:40.118465 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.120183 kubelet[1921]: E0129 16:34:40.118783 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.120183 kubelet[1921]: W0129 16:34:40.118797 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.122115 kubelet[1921]: E0129 16:34:40.118908 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.122115 kubelet[1921]: E0129 16:34:40.120790 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.122115 kubelet[1921]: W0129 16:34:40.120806 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.122115 kubelet[1921]: E0129 16:34:40.120853 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.122115 kubelet[1921]: E0129 16:34:40.121187 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.122115 kubelet[1921]: W0129 16:34:40.121202 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.122115 kubelet[1921]: E0129 16:34:40.121219 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.122115 kubelet[1921]: E0129 16:34:40.121976 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.122115 kubelet[1921]: W0129 16:34:40.121991 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.122115 kubelet[1921]: E0129 16:34:40.122008 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.122597 kubelet[1921]: E0129 16:34:40.122313 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.122597 kubelet[1921]: W0129 16:34:40.122326 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.122597 kubelet[1921]: E0129 16:34:40.122341 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.122750 kubelet[1921]: E0129 16:34:40.122635 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.122750 kubelet[1921]: W0129 16:34:40.122648 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.122750 kubelet[1921]: E0129 16:34:40.122662 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.123068 kubelet[1921]: E0129 16:34:40.123046 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.123068 kubelet[1921]: W0129 16:34:40.123066 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.123181 kubelet[1921]: E0129 16:34:40.123083 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.123396 kubelet[1921]: E0129 16:34:40.123372 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.123396 kubelet[1921]: W0129 16:34:40.123397 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.123507 kubelet[1921]: E0129 16:34:40.123412 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.123734 kubelet[1921]: E0129 16:34:40.123713 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.123734 kubelet[1921]: W0129 16:34:40.123733 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.123875 kubelet[1921]: E0129 16:34:40.123748 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.124196 kubelet[1921]: E0129 16:34:40.124142 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.124196 kubelet[1921]: W0129 16:34:40.124164 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.124196 kubelet[1921]: E0129 16:34:40.124196 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.124726 kubelet[1921]: E0129 16:34:40.124703 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.124726 kubelet[1921]: W0129 16:34:40.124724 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.125006 kubelet[1921]: E0129 16:34:40.124763 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.125850 kubelet[1921]: E0129 16:34:40.125344 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.125850 kubelet[1921]: W0129 16:34:40.125388 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.125850 kubelet[1921]: E0129 16:34:40.125451 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.126047 kubelet[1921]: E0129 16:34:40.126000 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.126047 kubelet[1921]: W0129 16:34:40.126015 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.126156 kubelet[1921]: E0129 16:34:40.126060 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.126495 kubelet[1921]: E0129 16:34:40.126472 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.126584 kubelet[1921]: W0129 16:34:40.126497 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.126663 kubelet[1921]: E0129 16:34:40.126641 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.127948 kubelet[1921]: E0129 16:34:40.127924 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.128076 kubelet[1921]: W0129 16:34:40.128055 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.128426 kubelet[1921]: E0129 16:34:40.128158 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.128600 kubelet[1921]: E0129 16:34:40.128572 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.128857 kubelet[1921]: W0129 16:34:40.128709 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.128857 kubelet[1921]: E0129 16:34:40.128753 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.129800 kubelet[1921]: E0129 16:34:40.129346 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.129800 kubelet[1921]: W0129 16:34:40.129392 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.129800 kubelet[1921]: E0129 16:34:40.129418 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.129800 kubelet[1921]: E0129 16:34:40.129792 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.130244 kubelet[1921]: W0129 16:34:40.129806 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.130244 kubelet[1921]: E0129 16:34:40.129844 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.132391 kubelet[1921]: E0129 16:34:40.131730 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.132391 kubelet[1921]: W0129 16:34:40.131752 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.132391 kubelet[1921]: E0129 16:34:40.131785 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.132391 kubelet[1921]: E0129 16:34:40.132243 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.132391 kubelet[1921]: W0129 16:34:40.132263 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.132391 kubelet[1921]: E0129 16:34:40.132360 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.133657 kubelet[1921]: E0129 16:34:40.133584 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.133657 kubelet[1921]: W0129 16:34:40.133626 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.133863 kubelet[1921]: E0129 16:34:40.133667 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:34:40.134061 kubelet[1921]: E0129 16:34:40.134039 1921 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:34:40.134145 kubelet[1921]: W0129 16:34:40.134061 1921 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:34:40.134145 kubelet[1921]: E0129 16:34:40.134077 1921 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:34:40.151032 systemd[1]: Started cri-containerd-a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e.scope - libcontainer container a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e. Jan 29 16:34:40.193468 containerd[1509]: time="2025-01-29T16:34:40.193293070Z" level=info msg="StartContainer for \"a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e\" returns successfully" Jan 29 16:34:40.210204 systemd[1]: cri-containerd-a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e.scope: Deactivated successfully. 
Jan 29 16:34:40.643562 containerd[1509]: time="2025-01-29T16:34:40.643451848Z" level=info msg="shim disconnected" id=a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e namespace=k8s.io Jan 29 16:34:40.643562 containerd[1509]: time="2025-01-29T16:34:40.643531159Z" level=warning msg="cleaning up after shim disconnected" id=a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e namespace=k8s.io Jan 29 16:34:40.643562 containerd[1509]: time="2025-01-29T16:34:40.643548232Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:34:40.856787 kubelet[1921]: E0129 16:34:40.856714 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:40.863857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a049b5b090b326237b8abdaea2979bbe8fc6c0678d654d745cbdda003672dc0e-rootfs.mount: Deactivated successfully. Jan 29 16:34:41.042202 containerd[1509]: time="2025-01-29T16:34:41.042044388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 16:34:41.061911 kubelet[1921]: I0129 16:34:41.061796 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jn586" podStartSLOduration=4.141962585 podStartE2EDuration="6.061770718s" podCreationTimestamp="2025-01-29 16:34:35 +0000 UTC" firstStartedPulling="2025-01-29 16:34:37.079808231 +0000 UTC m=+3.216226614" lastFinishedPulling="2025-01-29 16:34:38.999616335 +0000 UTC m=+5.136034747" observedRunningTime="2025-01-29 16:34:40.048958112 +0000 UTC m=+6.185376504" watchObservedRunningTime="2025-01-29 16:34:41.061770718 +0000 UTC m=+7.198189111" Jan 29 16:34:41.857415 kubelet[1921]: E0129 16:34:41.857343 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:42.005722 kubelet[1921]: E0129 16:34:42.005023 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: 
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:42.857617 kubelet[1921]: E0129 16:34:42.857566 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:43.857871 kubelet[1921]: E0129 16:34:43.857802 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:44.005987 kubelet[1921]: E0129 16:34:44.005917 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:44.785203 containerd[1509]: time="2025-01-29T16:34:44.785111619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:44.786606 containerd[1509]: time="2025-01-29T16:34:44.786531408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 16:34:44.788369 containerd[1509]: time="2025-01-29T16:34:44.788289960Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:44.792001 containerd[1509]: time="2025-01-29T16:34:44.791918286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:34:44.793328 containerd[1509]: 
time="2025-01-29T16:34:44.793003831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.750902406s" Jan 29 16:34:44.793328 containerd[1509]: time="2025-01-29T16:34:44.793049834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 16:34:44.796047 containerd[1509]: time="2025-01-29T16:34:44.796005036Z" level=info msg="CreateContainer within sandbox \"25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 16:34:44.822170 containerd[1509]: time="2025-01-29T16:34:44.822087536Z" level=info msg="CreateContainer within sandbox \"25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df\"" Jan 29 16:34:44.822993 containerd[1509]: time="2025-01-29T16:34:44.822933737Z" level=info msg="StartContainer for \"525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df\"" Jan 29 16:34:44.860268 kubelet[1921]: E0129 16:34:44.860203 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:44.869214 systemd[1]: Started cri-containerd-525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df.scope - libcontainer container 525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df. 
Jan 29 16:34:44.912015 containerd[1509]: time="2025-01-29T16:34:44.911810889Z" level=info msg="StartContainer for \"525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df\" returns successfully" Jan 29 16:34:45.775133 containerd[1509]: time="2025-01-29T16:34:45.775053074Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:34:45.778169 systemd[1]: cri-containerd-525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df.scope: Deactivated successfully. Jan 29 16:34:45.778570 systemd[1]: cri-containerd-525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df.scope: Consumed 595ms CPU time, 172.8M memory peak, 151M written to disk. Jan 29 16:34:45.820113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df-rootfs.mount: Deactivated successfully. Jan 29 16:34:45.825600 kubelet[1921]: I0129 16:34:45.825561 1921 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 16:34:45.860707 kubelet[1921]: E0129 16:34:45.860647 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:46.013242 systemd[1]: Created slice kubepods-besteffort-podec43a3f5_5f5f_4f82_a768_b19afc7730bd.slice - libcontainer container kubepods-besteffort-podec43a3f5_5f5f_4f82_a768_b19afc7730bd.slice. Jan 29 16:34:46.020811 containerd[1509]: time="2025-01-29T16:34:46.020348291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:0,}" Jan 29 16:34:46.407347 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 29 16:34:46.582050 containerd[1509]: time="2025-01-29T16:34:46.581962483Z" level=info msg="shim disconnected" id=525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df namespace=k8s.io Jan 29 16:34:46.582050 containerd[1509]: time="2025-01-29T16:34:46.582043064Z" level=warning msg="cleaning up after shim disconnected" id=525037c7be0d6feef737f69109f6afbca951a96224fb7e43ec39e587e21187df namespace=k8s.io Jan 29 16:34:46.582050 containerd[1509]: time="2025-01-29T16:34:46.582057919Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:34:46.673790 containerd[1509]: time="2025-01-29T16:34:46.673099837Z" level=error msg="Failed to destroy network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:46.673790 containerd[1509]: time="2025-01-29T16:34:46.673590155Z" level=error msg="encountered an error cleaning up failed sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:46.676057 containerd[1509]: time="2025-01-29T16:34:46.673818409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:46.676137 kubelet[1921]: E0129 16:34:46.674114 1921 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:46.676137 kubelet[1921]: E0129 16:34:46.674213 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:46.676137 kubelet[1921]: E0129 16:34:46.674246 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:46.676350 kubelet[1921]: E0129 16:34:46.674304 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:46.677406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a-shm.mount: Deactivated successfully. Jan 29 16:34:46.861020 kubelet[1921]: E0129 16:34:46.860941 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:47.058342 kubelet[1921]: I0129 16:34:47.058189 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a" Jan 29 16:34:47.059963 containerd[1509]: time="2025-01-29T16:34:47.059711944Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\"" Jan 29 16:34:47.063547 containerd[1509]: time="2025-01-29T16:34:47.060113484Z" level=info msg="Ensure that sandbox ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a in task-service has been cleanup successfully" Jan 29 16:34:47.063547 containerd[1509]: time="2025-01-29T16:34:47.062944825Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully" Jan 29 16:34:47.063547 containerd[1509]: time="2025-01-29T16:34:47.063009651Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully" Jan 29 16:34:47.065155 containerd[1509]: time="2025-01-29T16:34:47.065053373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:1,}" Jan 29 16:34:47.065654 systemd[1]: run-netns-cni\x2d41bae3e7\x2d9719\x2db483\x2d7b71\x2dc71cb685c455.mount: Deactivated successfully. 
Jan 29 16:34:47.069575 containerd[1509]: time="2025-01-29T16:34:47.069479667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 16:34:47.153955 containerd[1509]: time="2025-01-29T16:34:47.153877479Z" level=error msg="Failed to destroy network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:47.154403 containerd[1509]: time="2025-01-29T16:34:47.154357075Z" level=error msg="encountered an error cleaning up failed sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:47.154520 containerd[1509]: time="2025-01-29T16:34:47.154453146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:47.156878 kubelet[1921]: E0129 16:34:47.154737 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:47.156878 
kubelet[1921]: E0129 16:34:47.154808 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:47.156878 kubelet[1921]: E0129 16:34:47.154873 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:47.157106 kubelet[1921]: E0129 16:34:47.154930 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:47.158191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2-shm.mount: Deactivated successfully. 
Jan 29 16:34:47.861587 kubelet[1921]: E0129 16:34:47.861529 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:48.076040 kubelet[1921]: I0129 16:34:48.075171 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2" Jan 29 16:34:48.076860 containerd[1509]: time="2025-01-29T16:34:48.076410565Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\"" Jan 29 16:34:48.076860 containerd[1509]: time="2025-01-29T16:34:48.076763959Z" level=info msg="Ensure that sandbox ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2 in task-service has been cleanup successfully" Jan 29 16:34:48.082097 containerd[1509]: time="2025-01-29T16:34:48.081937629Z" level=info msg="TearDown network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" successfully" Jan 29 16:34:48.082097 containerd[1509]: time="2025-01-29T16:34:48.081979249Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" returns successfully" Jan 29 16:34:48.082711 containerd[1509]: time="2025-01-29T16:34:48.082354431Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\"" Jan 29 16:34:48.082711 containerd[1509]: time="2025-01-29T16:34:48.082527385Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully" Jan 29 16:34:48.082711 containerd[1509]: time="2025-01-29T16:34:48.082545419Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully" Jan 29 16:34:48.084565 containerd[1509]: time="2025-01-29T16:34:48.084045523Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:2,}" Jan 29 16:34:48.086456 systemd[1]: run-netns-cni\x2d4825a381\x2dd014\x2d65ba\x2dd5ac\x2dd751b8072a79.mount: Deactivated successfully. Jan 29 16:34:48.155642 systemd[1]: Created slice kubepods-besteffort-pod9417d7f3_11ad_4063_8c73_fced7d64fb93.slice - libcontainer container kubepods-besteffort-pod9417d7f3_11ad_4063_8c73_fced7d64fb93.slice. Jan 29 16:34:48.226434 containerd[1509]: time="2025-01-29T16:34:48.226361022Z" level=error msg="Failed to destroy network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:48.231003 containerd[1509]: time="2025-01-29T16:34:48.226863145Z" level=error msg="encountered an error cleaning up failed sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:48.231003 containerd[1509]: time="2025-01-29T16:34:48.226958378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:48.231147 kubelet[1921]: E0129 16:34:48.227241 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:48.231147 kubelet[1921]: E0129 16:34:48.227322 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:48.231147 kubelet[1921]: E0129 16:34:48.227353 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:48.231318 kubelet[1921]: E0129 16:34:48.227435 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:48.233457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544-shm.mount: Deactivated successfully. Jan 29 16:34:48.283173 kubelet[1921]: I0129 16:34:48.283110 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gct5w\" (UniqueName: \"kubernetes.io/projected/9417d7f3-11ad-4063-8c73-fced7d64fb93-kube-api-access-gct5w\") pod \"nginx-deployment-8587fbcb89-fvdts\" (UID: \"9417d7f3-11ad-4063-8c73-fced7d64fb93\") " pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:48.470121 containerd[1509]: time="2025-01-29T16:34:48.469959278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:0,}" Jan 29 16:34:48.600330 containerd[1509]: time="2025-01-29T16:34:48.599854379Z" level=error msg="Failed to destroy network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:48.600515 containerd[1509]: time="2025-01-29T16:34:48.600272679Z" level=error msg="encountered an error cleaning up failed sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:48.600706 containerd[1509]: time="2025-01-29T16:34:48.600642886Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:48.601206 kubelet[1921]: E0129 16:34:48.601157 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:48.601352 kubelet[1921]: E0129 16:34:48.601239 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:48.601352 kubelet[1921]: E0129 16:34:48.601270 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:48.601352 kubelet[1921]: E0129 16:34:48.601327 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-fvdts" podUID="9417d7f3-11ad-4063-8c73-fced7d64fb93" Jan 29 16:34:48.862251 kubelet[1921]: E0129 16:34:48.862142 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:49.080883 kubelet[1921]: I0129 16:34:49.080460 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544" Jan 29 16:34:49.083018 kubelet[1921]: I0129 16:34:49.082913 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631" Jan 29 16:34:49.083935 containerd[1509]: time="2025-01-29T16:34:49.083425332Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\"" Jan 29 16:34:49.083935 containerd[1509]: time="2025-01-29T16:34:49.083723120Z" level=info msg="Ensure that sandbox 261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544 in task-service has been cleanup successfully" Jan 29 16:34:49.085756 containerd[1509]: time="2025-01-29T16:34:49.084905163Z" level=info msg="TearDown network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" successfully" Jan 29 16:34:49.085756 containerd[1509]: time="2025-01-29T16:34:49.084955668Z" level=info 
msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" returns successfully" Jan 29 16:34:49.085756 containerd[1509]: time="2025-01-29T16:34:49.085329865Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\"" Jan 29 16:34:49.085756 containerd[1509]: time="2025-01-29T16:34:49.085435861Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\"" Jan 29 16:34:49.085756 containerd[1509]: time="2025-01-29T16:34:49.085532677Z" level=info msg="TearDown network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" successfully" Jan 29 16:34:49.085756 containerd[1509]: time="2025-01-29T16:34:49.085549863Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" returns successfully" Jan 29 16:34:49.085756 containerd[1509]: time="2025-01-29T16:34:49.085572795Z" level=info msg="Ensure that sandbox 3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631 in task-service has been cleanup successfully" Jan 29 16:34:49.086172 containerd[1509]: time="2025-01-29T16:34:49.085764436Z" level=info msg="TearDown network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" successfully" Jan 29 16:34:49.086172 containerd[1509]: time="2025-01-29T16:34:49.085787129Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" returns successfully" Jan 29 16:34:49.088083 containerd[1509]: time="2025-01-29T16:34:49.086707898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:1,}" Jan 29 16:34:49.088083 containerd[1509]: time="2025-01-29T16:34:49.087139319Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\"" Jan 29 16:34:49.088083 
containerd[1509]: time="2025-01-29T16:34:49.087254859Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully" Jan 29 16:34:49.088083 containerd[1509]: time="2025-01-29T16:34:49.087275378Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully" Jan 29 16:34:49.091002 containerd[1509]: time="2025-01-29T16:34:49.090500278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:3,}" Jan 29 16:34:49.094358 systemd[1]: run-netns-cni\x2de8bef075\x2dacb5\x2d2d30\x2dd072\x2d8a9a1b47a1b6.mount: Deactivated successfully. Jan 29 16:34:49.094513 systemd[1]: run-netns-cni\x2d019430c4\x2dbb3a\x2dba38\x2d208f\x2d619ff7663046.mount: Deactivated successfully. Jan 29 16:34:49.419111 containerd[1509]: time="2025-01-29T16:34:49.419041163Z" level=error msg="Failed to destroy network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:49.422192 containerd[1509]: time="2025-01-29T16:34:49.421731726Z" level=error msg="encountered an error cleaning up failed sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:49.422336 containerd[1509]: time="2025-01-29T16:34:49.421984871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:1,} failed, error" 
error="failed to setup network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:49.423218 kubelet[1921]: E0129 16:34:49.422641 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:49.423218 kubelet[1921]: E0129 16:34:49.422721 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:49.423218 kubelet[1921]: E0129 16:34:49.422763 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:49.423477 kubelet[1921]: E0129 16:34:49.422904 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-fvdts" podUID="9417d7f3-11ad-4063-8c73-fced7d64fb93" Jan 29 16:34:49.445440 containerd[1509]: time="2025-01-29T16:34:49.445390446Z" level=error msg="Failed to destroy network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:49.446251 containerd[1509]: time="2025-01-29T16:34:49.446191503Z" level=error msg="encountered an error cleaning up failed sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:49.446377 containerd[1509]: time="2025-01-29T16:34:49.446292975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:49.446705 kubelet[1921]: E0129 16:34:49.446653 1921 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:49.446849 kubelet[1921]: E0129 16:34:49.446720 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:49.446849 kubelet[1921]: E0129 16:34:49.446755 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:49.447386 kubelet[1921]: E0129 16:34:49.447122 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:49.863397 kubelet[1921]: E0129 16:34:49.863299 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:50.087476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6-shm.mount: Deactivated successfully. Jan 29 16:34:50.093012 kubelet[1921]: I0129 16:34:50.092970 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6" Jan 29 16:34:50.094726 containerd[1509]: time="2025-01-29T16:34:50.094092309Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\"" Jan 29 16:34:50.094726 containerd[1509]: time="2025-01-29T16:34:50.094473363Z" level=info msg="Ensure that sandbox ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6 in task-service has been cleanup successfully" Jan 29 16:34:50.097386 containerd[1509]: time="2025-01-29T16:34:50.096999893Z" level=info msg="TearDown network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" successfully" Jan 29 16:34:50.097386 containerd[1509]: time="2025-01-29T16:34:50.097034036Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" returns successfully" Jan 29 16:34:50.098672 containerd[1509]: time="2025-01-29T16:34:50.098639651Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\"" Jan 29 16:34:50.099693 containerd[1509]: time="2025-01-29T16:34:50.099590215Z" level=info msg="TearDown network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" successfully" Jan 29 16:34:50.099693 containerd[1509]: 
time="2025-01-29T16:34:50.099619415Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" returns successfully" Jan 29 16:34:50.102346 systemd[1]: run-netns-cni\x2dfb9fdd92\x2d249b\x2d8ed9\x2dc662\x2dfa9235f4713b.mount: Deactivated successfully. Jan 29 16:34:50.105929 containerd[1509]: time="2025-01-29T16:34:50.105379585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:2,}" Jan 29 16:34:50.111244 kubelet[1921]: I0129 16:34:50.111200 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547" Jan 29 16:34:50.113008 containerd[1509]: time="2025-01-29T16:34:50.112950170Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\"" Jan 29 16:34:50.113369 containerd[1509]: time="2025-01-29T16:34:50.113230115Z" level=info msg="Ensure that sandbox 3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547 in task-service has been cleanup successfully" Jan 29 16:34:50.118364 containerd[1509]: time="2025-01-29T16:34:50.115306220Z" level=info msg="TearDown network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" successfully" Jan 29 16:34:50.118364 containerd[1509]: time="2025-01-29T16:34:50.115339030Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" returns successfully" Jan 29 16:34:50.119283 systemd[1]: run-netns-cni\x2d8cad2e25\x2d8093\x2d2ed9\x2d2576\x2d8f3e31d19975.mount: Deactivated successfully. 
Jan 29 16:34:50.124646 containerd[1509]: time="2025-01-29T16:34:50.124604817Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\"" Jan 29 16:34:50.124814 containerd[1509]: time="2025-01-29T16:34:50.124750291Z" level=info msg="TearDown network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" successfully" Jan 29 16:34:50.124814 containerd[1509]: time="2025-01-29T16:34:50.124768404Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" returns successfully" Jan 29 16:34:50.127675 containerd[1509]: time="2025-01-29T16:34:50.126274138Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\"" Jan 29 16:34:50.127675 containerd[1509]: time="2025-01-29T16:34:50.127304660Z" level=info msg="TearDown network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" successfully" Jan 29 16:34:50.127675 containerd[1509]: time="2025-01-29T16:34:50.127327968Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" returns successfully" Jan 29 16:34:50.128307 containerd[1509]: time="2025-01-29T16:34:50.128260608Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\"" Jan 29 16:34:50.128927 containerd[1509]: time="2025-01-29T16:34:50.128871999Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully" Jan 29 16:34:50.128927 containerd[1509]: time="2025-01-29T16:34:50.128897490Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully" Jan 29 16:34:50.134063 containerd[1509]: time="2025-01-29T16:34:50.133355467Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:4,}" Jan 29 16:34:50.323664 containerd[1509]: time="2025-01-29T16:34:50.323601510Z" level=error msg="Failed to destroy network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:50.325074 containerd[1509]: time="2025-01-29T16:34:50.325022803Z" level=error msg="encountered an error cleaning up failed sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:50.326791 containerd[1509]: time="2025-01-29T16:34:50.325944287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:50.327399 kubelet[1921]: E0129 16:34:50.327349 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:50.327518 kubelet[1921]: E0129 
16:34:50.327431 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:50.327518 kubelet[1921]: E0129 16:34:50.327462 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:50.327632 kubelet[1921]: E0129 16:34:50.327534 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-fvdts" podUID="9417d7f3-11ad-4063-8c73-fced7d64fb93" Jan 29 16:34:50.338010 containerd[1509]: time="2025-01-29T16:34:50.337594444Z" level=error msg="Failed to destroy network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:50.338226 containerd[1509]: time="2025-01-29T16:34:50.338187202Z" level=error msg="encountered an error cleaning up failed sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:50.338513 containerd[1509]: time="2025-01-29T16:34:50.338295076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:50.338681 kubelet[1921]: E0129 16:34:50.338621 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:50.338959 kubelet[1921]: E0129 16:34:50.338697 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:50.338959 kubelet[1921]: E0129 16:34:50.338727 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:50.338959 kubelet[1921]: E0129 16:34:50.338784 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:50.863920 kubelet[1921]: E0129 16:34:50.863862 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:51.085190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac-shm.mount: Deactivated successfully. 
Jan 29 16:34:51.118970 kubelet[1921]: I0129 16:34:51.118556 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78" Jan 29 16:34:51.120232 containerd[1509]: time="2025-01-29T16:34:51.119927700Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\"" Jan 29 16:34:51.122061 containerd[1509]: time="2025-01-29T16:34:51.120215873Z" level=info msg="Ensure that sandbox 445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78 in task-service has been cleanup successfully" Jan 29 16:34:51.122334 containerd[1509]: time="2025-01-29T16:34:51.122300935Z" level=info msg="TearDown network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" successfully" Jan 29 16:34:51.125047 kubelet[1921]: I0129 16:34:51.122955 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac" Jan 29 16:34:51.125646 containerd[1509]: time="2025-01-29T16:34:51.125276523Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" returns successfully" Jan 29 16:34:51.125646 containerd[1509]: time="2025-01-29T16:34:51.123553460Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\"" Jan 29 16:34:51.125646 containerd[1509]: time="2025-01-29T16:34:51.125636852Z" level=info msg="Ensure that sandbox c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac in task-service has been cleanup successfully" Jan 29 16:34:51.126949 systemd[1]: run-netns-cni\x2dfcda806f\x2d1378\x2dc22a\x2d359e\x2d34f22ea4b051.mount: Deactivated successfully. 
Jan 29 16:34:51.129115 containerd[1509]: time="2025-01-29T16:34:51.129077495Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\"" Jan 29 16:34:51.129239 containerd[1509]: time="2025-01-29T16:34:51.129220680Z" level=info msg="TearDown network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" successfully" Jan 29 16:34:51.129297 containerd[1509]: time="2025-01-29T16:34:51.129240866Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" returns successfully" Jan 29 16:34:51.131958 containerd[1509]: time="2025-01-29T16:34:51.129550339Z" level=info msg="TearDown network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" successfully" Jan 29 16:34:51.131958 containerd[1509]: time="2025-01-29T16:34:51.129576120Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" returns successfully" Jan 29 16:34:51.131958 containerd[1509]: time="2025-01-29T16:34:51.130258262Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\"" Jan 29 16:34:51.131958 containerd[1509]: time="2025-01-29T16:34:51.130377876Z" level=info msg="TearDown network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" successfully" Jan 29 16:34:51.131958 containerd[1509]: time="2025-01-29T16:34:51.130396083Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" returns successfully" Jan 29 16:34:51.132290 containerd[1509]: time="2025-01-29T16:34:51.132150083Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\"" Jan 29 16:34:51.132290 containerd[1509]: time="2025-01-29T16:34:51.132278450Z" level=info msg="TearDown network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" successfully" Jan 
29 16:34:51.132384 containerd[1509]: time="2025-01-29T16:34:51.132295730Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" returns successfully" Jan 29 16:34:51.132433 containerd[1509]: time="2025-01-29T16:34:51.132382053Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\"" Jan 29 16:34:51.135409 containerd[1509]: time="2025-01-29T16:34:51.132491350Z" level=info msg="TearDown network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" successfully" Jan 29 16:34:51.135409 containerd[1509]: time="2025-01-29T16:34:51.132511806Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" returns successfully" Jan 29 16:34:51.135409 containerd[1509]: time="2025-01-29T16:34:51.134803681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:3,}" Jan 29 16:34:51.134367 systemd[1]: run-netns-cni\x2d122f561a\x2d7fd1\x2dc45f\x2dbc65\x2d57f4e2844254.mount: Deactivated successfully. 
Jan 29 16:34:51.135764 containerd[1509]: time="2025-01-29T16:34:51.135611317Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\"" Jan 29 16:34:51.135764 containerd[1509]: time="2025-01-29T16:34:51.135740290Z" level=info msg="TearDown network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" successfully" Jan 29 16:34:51.135764 containerd[1509]: time="2025-01-29T16:34:51.135760568Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" returns successfully" Jan 29 16:34:51.137110 containerd[1509]: time="2025-01-29T16:34:51.136271759Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\"" Jan 29 16:34:51.137110 containerd[1509]: time="2025-01-29T16:34:51.136385113Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully" Jan 29 16:34:51.137110 containerd[1509]: time="2025-01-29T16:34:51.136402423Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully" Jan 29 16:34:51.137253 containerd[1509]: time="2025-01-29T16:34:51.137119795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:5,}" Jan 29 16:34:51.356521 containerd[1509]: time="2025-01-29T16:34:51.355761796Z" level=error msg="Failed to destroy network for sandbox \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:51.359561 containerd[1509]: time="2025-01-29T16:34:51.359492835Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:51.359759 containerd[1509]: time="2025-01-29T16:34:51.359611061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:51.361843 kubelet[1921]: E0129 16:34:51.361351 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:51.361843 kubelet[1921]: E0129 16:34:51.361431 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:51.361843 kubelet[1921]: E0129 16:34:51.361470 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:51.362096 kubelet[1921]: E0129 16:34:51.361526 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:51.367966 containerd[1509]: time="2025-01-29T16:34:51.367568759Z" level=error msg="Failed to destroy network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:51.369177 containerd[1509]: time="2025-01-29T16:34:51.369052999Z" level=error msg="encountered an error cleaning up failed sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:51.370675 containerd[1509]: time="2025-01-29T16:34:51.369979267Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:51.371482 kubelet[1921]: E0129 16:34:51.371430 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:51.371630 kubelet[1921]: E0129 16:34:51.371516 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:51.371630 kubelet[1921]: E0129 16:34:51.371554 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:51.371741 kubelet[1921]: E0129 16:34:51.371649 1921 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-fvdts" podUID="9417d7f3-11ad-4063-8c73-fced7d64fb93" Jan 29 16:34:51.865344 kubelet[1921]: E0129 16:34:51.865296 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:52.088301 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8-shm.mount: Deactivated successfully. 
Jan 29 16:34:52.133121 kubelet[1921]: I0129 16:34:52.131958 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf" Jan 29 16:34:52.133486 containerd[1509]: time="2025-01-29T16:34:52.133447098Z" level=info msg="StopPodSandbox for \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\"" Jan 29 16:34:52.134284 containerd[1509]: time="2025-01-29T16:34:52.134251639Z" level=info msg="Ensure that sandbox f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf in task-service has been cleanup successfully" Jan 29 16:34:52.134673 containerd[1509]: time="2025-01-29T16:34:52.134628841Z" level=info msg="TearDown network for sandbox \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\" successfully" Jan 29 16:34:52.134795 containerd[1509]: time="2025-01-29T16:34:52.134775502Z" level=info msg="StopPodSandbox for \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\" returns successfully" Jan 29 16:34:52.138574 containerd[1509]: time="2025-01-29T16:34:52.138301317Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\"" Jan 29 16:34:52.139286 containerd[1509]: time="2025-01-29T16:34:52.139248240Z" level=info msg="TearDown network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" successfully" Jan 29 16:34:52.139286 containerd[1509]: time="2025-01-29T16:34:52.139283260Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" returns successfully" Jan 29 16:34:52.140416 systemd[1]: run-netns-cni\x2d46cb6ba2\x2d5e15\x2d05e7\x2d0501\x2d109e3f4f102d.mount: Deactivated successfully. 
Jan 29 16:34:52.143417 containerd[1509]: time="2025-01-29T16:34:52.143357091Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\"" Jan 29 16:34:52.144007 containerd[1509]: time="2025-01-29T16:34:52.143484327Z" level=info msg="TearDown network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" successfully" Jan 29 16:34:52.144007 containerd[1509]: time="2025-01-29T16:34:52.143558639Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" returns successfully" Jan 29 16:34:52.144988 containerd[1509]: time="2025-01-29T16:34:52.144940957Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\"" Jan 29 16:34:52.145088 containerd[1509]: time="2025-01-29T16:34:52.145063212Z" level=info msg="TearDown network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" successfully" Jan 29 16:34:52.145141 containerd[1509]: time="2025-01-29T16:34:52.145086820Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" returns successfully" Jan 29 16:34:52.146798 containerd[1509]: time="2025-01-29T16:34:52.146761183Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\"" Jan 29 16:34:52.146923 containerd[1509]: time="2025-01-29T16:34:52.146896996Z" level=info msg="TearDown network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" successfully" Jan 29 16:34:52.146994 containerd[1509]: time="2025-01-29T16:34:52.146921558Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" returns successfully" Jan 29 16:34:52.148122 containerd[1509]: time="2025-01-29T16:34:52.148028423Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\"" Jan 29 16:34:52.148289 
containerd[1509]: time="2025-01-29T16:34:52.148228294Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully" Jan 29 16:34:52.148289 containerd[1509]: time="2025-01-29T16:34:52.148249821Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully" Jan 29 16:34:52.150556 kubelet[1921]: I0129 16:34:52.150468 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8" Jan 29 16:34:52.153867 containerd[1509]: time="2025-01-29T16:34:52.153630243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:6,}" Jan 29 16:34:52.154278 containerd[1509]: time="2025-01-29T16:34:52.154214996Z" level=info msg="StopPodSandbox for \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\"" Jan 29 16:34:52.155951 containerd[1509]: time="2025-01-29T16:34:52.154622599Z" level=info msg="Ensure that sandbox a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8 in task-service has been cleanup successfully" Jan 29 16:34:52.155951 containerd[1509]: time="2025-01-29T16:34:52.155061962Z" level=info msg="TearDown network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\" successfully" Jan 29 16:34:52.155951 containerd[1509]: time="2025-01-29T16:34:52.155090588Z" level=info msg="StopPodSandbox for \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\" returns successfully" Jan 29 16:34:52.159493 systemd[1]: run-netns-cni\x2dab3b9c32\x2dae37\x2d6ab3\x2da35b\x2d46f8fd831d97.mount: Deactivated successfully. 
Jan 29 16:34:52.163858 containerd[1509]: time="2025-01-29T16:34:52.162051853Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\"" Jan 29 16:34:52.163858 containerd[1509]: time="2025-01-29T16:34:52.162180680Z" level=info msg="TearDown network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" successfully" Jan 29 16:34:52.163858 containerd[1509]: time="2025-01-29T16:34:52.162200532Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" returns successfully" Jan 29 16:34:52.163858 containerd[1509]: time="2025-01-29T16:34:52.163655730Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\"" Jan 29 16:34:52.163858 containerd[1509]: time="2025-01-29T16:34:52.163772431Z" level=info msg="TearDown network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" successfully" Jan 29 16:34:52.163858 containerd[1509]: time="2025-01-29T16:34:52.163790668Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" returns successfully" Jan 29 16:34:52.166019 containerd[1509]: time="2025-01-29T16:34:52.164809839Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\"" Jan 29 16:34:52.166019 containerd[1509]: time="2025-01-29T16:34:52.164946045Z" level=info msg="TearDown network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" successfully" Jan 29 16:34:52.166019 containerd[1509]: time="2025-01-29T16:34:52.164966810Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" returns successfully" Jan 29 16:34:52.166514 containerd[1509]: time="2025-01-29T16:34:52.166482295Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:4,}" Jan 29 16:34:52.354630 containerd[1509]: time="2025-01-29T16:34:52.354558305Z" level=error msg="Failed to destroy network for sandbox \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:52.356412 containerd[1509]: time="2025-01-29T16:34:52.356355498Z" level=error msg="encountered an error cleaning up failed sandbox \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:52.356688 containerd[1509]: time="2025-01-29T16:34:52.356636141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:52.357212 kubelet[1921]: E0129 16:34:52.357163 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:52.357456 kubelet[1921]: E0129 
16:34:52.357371 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:52.357456 kubelet[1921]: E0129 16:34:52.357412 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2" Jan 29 16:34:52.358199 kubelet[1921]: E0129 16:34:52.358125 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd" Jan 29 16:34:52.405753 containerd[1509]: time="2025-01-29T16:34:52.405508425Z" level=error msg="Failed to destroy network for sandbox \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:52.407984 containerd[1509]: time="2025-01-29T16:34:52.407908749Z" level=error msg="encountered an error cleaning up failed sandbox \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:52.408278 containerd[1509]: time="2025-01-29T16:34:52.408237868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:52.409370 kubelet[1921]: E0129 16:34:52.408961 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:52.409370 kubelet[1921]: E0129 16:34:52.409034 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:52.409370 kubelet[1921]: E0129 16:34:52.409069 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:52.409577 kubelet[1921]: E0129 16:34:52.409127 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-fvdts" podUID="9417d7f3-11ad-4063-8c73-fced7d64fb93" Jan 29 16:34:52.866047 kubelet[1921]: E0129 16:34:52.865958 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:34:53.087057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f-shm.mount: Deactivated successfully. 
Jan 29 16:34:53.159107 kubelet[1921]: I0129 16:34:53.158787 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f" Jan 29 16:34:53.160652 containerd[1509]: time="2025-01-29T16:34:53.160169892Z" level=info msg="StopPodSandbox for \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\"" Jan 29 16:34:53.160652 containerd[1509]: time="2025-01-29T16:34:53.160459308Z" level=info msg="Ensure that sandbox 63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f in task-service has been cleanup successfully" Jan 29 16:34:53.161882 containerd[1509]: time="2025-01-29T16:34:53.161849741Z" level=info msg="TearDown network for sandbox \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\" successfully" Jan 29 16:34:53.162096 containerd[1509]: time="2025-01-29T16:34:53.162004455Z" level=info msg="StopPodSandbox for \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\" returns successfully" Jan 29 16:34:53.164441 containerd[1509]: time="2025-01-29T16:34:53.164070567Z" level=info msg="StopPodSandbox for \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\"" Jan 29 16:34:53.164441 containerd[1509]: time="2025-01-29T16:34:53.164195672Z" level=info msg="TearDown network for sandbox \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\" successfully" Jan 29 16:34:53.164441 containerd[1509]: time="2025-01-29T16:34:53.164216354Z" level=info msg="StopPodSandbox for \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\" returns successfully" Jan 29 16:34:53.165282 systemd[1]: run-netns-cni\x2d618cedc8\x2d6071\x2d4353\x2dae1f\x2d4a18087151bf.mount: Deactivated successfully. 
Jan 29 16:34:53.167142 containerd[1509]: time="2025-01-29T16:34:53.167021162Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\"" Jan 29 16:34:53.167456 containerd[1509]: time="2025-01-29T16:34:53.167409595Z" level=info msg="TearDown network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" successfully" Jan 29 16:34:53.167671 containerd[1509]: time="2025-01-29T16:34:53.167648982Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" returns successfully" Jan 29 16:34:53.168228 containerd[1509]: time="2025-01-29T16:34:53.168191559Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\"" Jan 29 16:34:53.168515 containerd[1509]: time="2025-01-29T16:34:53.168446611Z" level=info msg="TearDown network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" successfully" Jan 29 16:34:53.168515 containerd[1509]: time="2025-01-29T16:34:53.168492270Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" returns successfully" Jan 29 16:34:53.169659 containerd[1509]: time="2025-01-29T16:34:53.169610427Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\"" Jan 29 16:34:53.169759 containerd[1509]: time="2025-01-29T16:34:53.169722335Z" level=info msg="TearDown network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" successfully" Jan 29 16:34:53.169759 containerd[1509]: time="2025-01-29T16:34:53.169741590Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" returns successfully" Jan 29 16:34:53.170735 containerd[1509]: time="2025-01-29T16:34:53.170377678Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\"" Jan 29 16:34:53.170735 
containerd[1509]: time="2025-01-29T16:34:53.170502486Z" level=info msg="TearDown network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" successfully" Jan 29 16:34:53.170735 containerd[1509]: time="2025-01-29T16:34:53.170522956Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" returns successfully" Jan 29 16:34:53.170934 kubelet[1921]: I0129 16:34:53.170751 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f" Jan 29 16:34:53.172341 containerd[1509]: time="2025-01-29T16:34:53.171554861Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\"" Jan 29 16:34:53.172341 containerd[1509]: time="2025-01-29T16:34:53.171943414Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully" Jan 29 16:34:53.172845 containerd[1509]: time="2025-01-29T16:34:53.171966044Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully" Jan 29 16:34:53.173201 containerd[1509]: time="2025-01-29T16:34:53.173061870Z" level=info msg="StopPodSandbox for \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\"" Jan 29 16:34:53.173668 containerd[1509]: time="2025-01-29T16:34:53.173639166Z" level=info msg="Ensure that sandbox bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f in task-service has been cleanup successfully" Jan 29 16:34:53.176849 containerd[1509]: time="2025-01-29T16:34:53.174592594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:7,}" Jan 29 16:34:53.177557 containerd[1509]: time="2025-01-29T16:34:53.177014749Z" level=info msg="TearDown network for sandbox 
\"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\" successfully" Jan 29 16:34:53.177557 containerd[1509]: time="2025-01-29T16:34:53.177409393Z" level=info msg="StopPodSandbox for \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\" returns successfully" Jan 29 16:34:53.178519 systemd[1]: run-netns-cni\x2d7fbc1d27\x2d5075\x2d8ff5\x2ddaae\x2deb0cc598eb75.mount: Deactivated successfully. Jan 29 16:34:53.180697 containerd[1509]: time="2025-01-29T16:34:53.180666657Z" level=info msg="StopPodSandbox for \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\"" Jan 29 16:34:53.181075 containerd[1509]: time="2025-01-29T16:34:53.180965844Z" level=info msg="TearDown network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\" successfully" Jan 29 16:34:53.181075 containerd[1509]: time="2025-01-29T16:34:53.180991583Z" level=info msg="StopPodSandbox for \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\" returns successfully" Jan 29 16:34:53.182854 containerd[1509]: time="2025-01-29T16:34:53.182645480Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\"" Jan 29 16:34:53.182854 containerd[1509]: time="2025-01-29T16:34:53.182764594Z" level=info msg="TearDown network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" successfully" Jan 29 16:34:53.182854 containerd[1509]: time="2025-01-29T16:34:53.182783344Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" returns successfully" Jan 29 16:34:53.183765 containerd[1509]: time="2025-01-29T16:34:53.183605585Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\"" Jan 29 16:34:53.185177 containerd[1509]: time="2025-01-29T16:34:53.184113364Z" level=info msg="TearDown network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" 
successfully" Jan 29 16:34:53.185177 containerd[1509]: time="2025-01-29T16:34:53.184143873Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" returns successfully" Jan 29 16:34:53.185979 containerd[1509]: time="2025-01-29T16:34:53.185922322Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\"" Jan 29 16:34:53.186144 containerd[1509]: time="2025-01-29T16:34:53.186095642Z" level=info msg="TearDown network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" successfully" Jan 29 16:34:53.186204 containerd[1509]: time="2025-01-29T16:34:53.186144174Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" returns successfully" Jan 29 16:34:53.187241 containerd[1509]: time="2025-01-29T16:34:53.187205544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:5,}" Jan 29 16:34:53.383884 containerd[1509]: time="2025-01-29T16:34:53.383779683Z" level=error msg="Failed to destroy network for sandbox \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:53.384530 containerd[1509]: time="2025-01-29T16:34:53.384481842Z" level=error msg="encountered an error cleaning up failed sandbox \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:53.384871 containerd[1509]: time="2025-01-29T16:34:53.384801304Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:53.385253 kubelet[1921]: E0129 16:34:53.385209 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:34:53.385355 kubelet[1921]: E0129 16:34:53.385288 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:53.385355 kubelet[1921]: E0129 16:34:53.385324 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts" Jan 29 16:34:53.385446 kubelet[1921]: E0129 16:34:53.385382 1921 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-fvdts" podUID="9417d7f3-11ad-4063-8c73-fced7d64fb93"
Jan 29 16:34:53.399505 containerd[1509]: time="2025-01-29T16:34:53.399420756Z" level=error msg="Failed to destroy network for sandbox \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:53.400410 containerd[1509]: time="2025-01-29T16:34:53.400282699Z" level=error msg="encountered an error cleaning up failed sandbox \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:53.400777 containerd[1509]: time="2025-01-29T16:34:53.400454385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:53.400899 kubelet[1921]: E0129 16:34:53.400733 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:53.400899 kubelet[1921]: E0129 16:34:53.400804 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2"
Jan 29 16:34:53.400899 kubelet[1921]: E0129 16:34:53.400864 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2"
Jan 29 16:34:53.401078 kubelet[1921]: E0129 16:34:53.400921 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd"
Jan 29 16:34:53.866489 kubelet[1921]: E0129 16:34:53.866269 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:34:54.088066 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35-shm.mount: Deactivated successfully.
Jan 29 16:34:54.178518 kubelet[1921]: I0129 16:34:54.177876 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35"
Jan 29 16:34:54.179348 containerd[1509]: time="2025-01-29T16:34:54.178863634Z" level=info msg="StopPodSandbox for \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\""
Jan 29 16:34:54.179348 containerd[1509]: time="2025-01-29T16:34:54.179213252Z" level=info msg="Ensure that sandbox 45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35 in task-service has been cleanup successfully"
Jan 29 16:34:54.183429 containerd[1509]: time="2025-01-29T16:34:54.182940079Z" level=info msg="TearDown network for sandbox \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\" successfully"
Jan 29 16:34:54.183429 containerd[1509]: time="2025-01-29T16:34:54.182978719Z" level=info msg="StopPodSandbox for \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\" returns successfully"
Jan 29 16:34:54.183981 containerd[1509]: time="2025-01-29T16:34:54.183949223Z" level=info msg="StopPodSandbox for \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\""
Jan 29 16:34:54.184321 containerd[1509]: time="2025-01-29T16:34:54.184077121Z" level=info msg="TearDown network for sandbox \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\" successfully"
Jan 29 16:34:54.184321 containerd[1509]: time="2025-01-29T16:34:54.184098592Z" level=info msg="StopPodSandbox for \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\" returns successfully"
Jan 29 16:34:54.185774 systemd[1]: run-netns-cni\x2de53bf611\x2da018\x2dcf5f\x2de5a2\x2d5c18ccaacc2e.mount: Deactivated successfully.
Jan 29 16:34:54.187283 containerd[1509]: time="2025-01-29T16:34:54.186585110Z" level=info msg="StopPodSandbox for \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\""
Jan 29 16:34:54.187283 containerd[1509]: time="2025-01-29T16:34:54.186702901Z" level=info msg="TearDown network for sandbox \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\" successfully"
Jan 29 16:34:54.187283 containerd[1509]: time="2025-01-29T16:34:54.186721426Z" level=info msg="StopPodSandbox for \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\" returns successfully"
Jan 29 16:34:54.190973 containerd[1509]: time="2025-01-29T16:34:54.190920188Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\""
Jan 29 16:34:54.191076 containerd[1509]: time="2025-01-29T16:34:54.191039915Z" level=info msg="TearDown network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" successfully"
Jan 29 16:34:54.191076 containerd[1509]: time="2025-01-29T16:34:54.191059911Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" returns successfully"
Jan 29 16:34:54.192752 containerd[1509]: time="2025-01-29T16:34:54.191941301Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\""
Jan 29 16:34:54.192752 containerd[1509]: time="2025-01-29T16:34:54.192058506Z" level=info msg="TearDown network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" successfully"
Jan 29 16:34:54.192752 containerd[1509]: time="2025-01-29T16:34:54.192123350Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" returns successfully"
Jan 29 16:34:54.193754 containerd[1509]: time="2025-01-29T16:34:54.193710187Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\""
Jan 29 16:34:54.193932 containerd[1509]: time="2025-01-29T16:34:54.193815118Z" level=info msg="TearDown network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" successfully"
Jan 29 16:34:54.193932 containerd[1509]: time="2025-01-29T16:34:54.193847623Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" returns successfully"
Jan 29 16:34:54.195034 containerd[1509]: time="2025-01-29T16:34:54.195004586Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\""
Jan 29 16:34:54.195142 containerd[1509]: time="2025-01-29T16:34:54.195122635Z" level=info msg="TearDown network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" successfully"
Jan 29 16:34:54.195193 containerd[1509]: time="2025-01-29T16:34:54.195142366Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" returns successfully"
Jan 29 16:34:54.195892 containerd[1509]: time="2025-01-29T16:34:54.195858054Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\""
Jan 29 16:34:54.196010 containerd[1509]: time="2025-01-29T16:34:54.195984349Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully"
Jan 29 16:34:54.196071 containerd[1509]: time="2025-01-29T16:34:54.196008375Z" level=info msg="StopPodSandbox for
\"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully"
Jan 29 16:34:54.196816 kubelet[1921]: I0129 16:34:54.196757 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb"
Jan 29 16:34:54.197862 containerd[1509]: time="2025-01-29T16:34:54.197590023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:8,}"
Jan 29 16:34:54.198541 containerd[1509]: time="2025-01-29T16:34:54.198499848Z" level=info msg="StopPodSandbox for \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\""
Jan 29 16:34:54.199473 containerd[1509]: time="2025-01-29T16:34:54.199421413Z" level=info msg="Ensure that sandbox e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb in task-service has been cleanup successfully"
Jan 29 16:34:54.200216 containerd[1509]: time="2025-01-29T16:34:54.199812989Z" level=info msg="TearDown network for sandbox \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\" successfully"
Jan 29 16:34:54.200216 containerd[1509]: time="2025-01-29T16:34:54.199863863Z" level=info msg="StopPodSandbox for \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\" returns successfully"
Jan 29 16:34:54.205529 containerd[1509]: time="2025-01-29T16:34:54.202265694Z" level=info msg="StopPodSandbox for \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\""
Jan 29 16:34:54.205529 containerd[1509]: time="2025-01-29T16:34:54.202376589Z" level=info msg="TearDown network for sandbox \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\" successfully"
Jan 29 16:34:54.205529 containerd[1509]: time="2025-01-29T16:34:54.202395083Z" level=info msg="StopPodSandbox for \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\" returns successfully"
Jan 29 16:34:54.205529 containerd[1509]: time="2025-01-29T16:34:54.203062203Z" level=info msg="StopPodSandbox for \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\""
Jan 29 16:34:54.205529 containerd[1509]: time="2025-01-29T16:34:54.203171187Z" level=info msg="TearDown network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\" successfully"
Jan 29 16:34:54.205529 containerd[1509]: time="2025-01-29T16:34:54.203201909Z" level=info msg="StopPodSandbox for \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\" returns successfully"
Jan 29 16:34:54.204502 systemd[1]: run-netns-cni\x2d1ceacf0e\x2d5b9e\x2dfb55\x2d31b0\x2db1857e44d472.mount: Deactivated successfully.
Jan 29 16:34:54.206743 containerd[1509]: time="2025-01-29T16:34:54.206710008Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\""
Jan 29 16:34:54.206915 containerd[1509]: time="2025-01-29T16:34:54.206871279Z" level=info msg="TearDown network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" successfully"
Jan 29 16:34:54.206915 containerd[1509]: time="2025-01-29T16:34:54.206897162Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" returns successfully"
Jan 29 16:34:54.208324 containerd[1509]: time="2025-01-29T16:34:54.208285294Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\""
Jan 29 16:34:54.208499 containerd[1509]: time="2025-01-29T16:34:54.208451472Z" level=info msg="TearDown network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" successfully"
Jan 29 16:34:54.208499 containerd[1509]: time="2025-01-29T16:34:54.208477324Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" returns successfully"
Jan 29 16:34:54.210345 containerd[1509]: time="2025-01-29T16:34:54.210303369Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\""
Jan 29 16:34:54.210455 containerd[1509]: time="2025-01-29T16:34:54.210420976Z" level=info msg="TearDown network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" successfully"
Jan 29 16:34:54.210455 containerd[1509]: time="2025-01-29T16:34:54.210442074Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" returns successfully"
Jan 29 16:34:54.214815 containerd[1509]: time="2025-01-29T16:34:54.214759841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:6,}"
Jan 29 16:34:54.398453 containerd[1509]: time="2025-01-29T16:34:54.398340151Z" level=error msg="Failed to destroy network for sandbox \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:54.400261 containerd[1509]: time="2025-01-29T16:34:54.399970741Z" level=error msg="encountered an error cleaning up failed sandbox \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:54.400261 containerd[1509]: time="2025-01-29T16:34:54.400087372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:54.401253 kubelet[1921]: E0129 16:34:54.400730 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:54.401253 kubelet[1921]: E0129 16:34:54.400810 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts"
Jan 29 16:34:54.401253 kubelet[1921]: E0129 16:34:54.400964 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-fvdts"
Jan 29 16:34:54.401994 kubelet[1921]: E0129 16:34:54.401536 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-fvdts_default(9417d7f3-11ad-4063-8c73-fced7d64fb93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-fvdts" podUID="9417d7f3-11ad-4063-8c73-fced7d64fb93"
Jan 29 16:34:54.436319 containerd[1509]: time="2025-01-29T16:34:54.436086571Z" level=error msg="Failed to destroy network for sandbox \"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:54.437285 containerd[1509]: time="2025-01-29T16:34:54.437234089Z" level=error msg="encountered an error cleaning up failed sandbox \"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:54.437455 containerd[1509]: time="2025-01-29T16:34:54.437332490Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:54.438466 kubelet[1921]: E0129 16:34:54.437675 1921 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 16:34:54.438466 kubelet[1921]: E0129 16:34:54.437763 1921 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2"
Jan 29 16:34:54.438466 kubelet[1921]: E0129 16:34:54.437813 1921 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xz4l2"
Jan 29 16:34:54.438643 kubelet[1921]: E0129 16:34:54.437926 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xz4l2_calico-system(ec43a3f5-5f5f-4f82-a768-b19afc7730bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xz4l2" podUID="ec43a3f5-5f5f-4f82-a768-b19afc7730bd"
Jan 29 16:34:54.524882 containerd[1509]: time="2025-01-29T16:34:54.524796619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:34:54.526362 containerd[1509]: time="2025-01-29T16:34:54.526288732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 29 16:34:54.528434 containerd[1509]: time="2025-01-29T16:34:54.528374710Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:34:54.531396 containerd[1509]: time="2025-01-29T16:34:54.531306355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:34:54.532468 containerd[1509]: time="2025-01-29T16:34:54.532273888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.462745962s"
Jan 29 16:34:54.532468 containerd[1509]: time="2025-01-29T16:34:54.532322648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 29 16:34:54.543474 containerd[1509]: time="2025-01-29T16:34:54.543406065Z" level=info msg="CreateContainer within sandbox \"25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 29 16:34:54.564547 containerd[1509]: time="2025-01-29T16:34:54.564424323Z" level=info msg="CreateContainer within sandbox \"25ab7afba649a4bf4e954dd6f7b9a7665986d88b2d11e2cd04e6c41cc6595c24\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d0ee69af6bb1185ef035d175062639ea2c4694cedff6ce47244b96fbba3f5b2c\""
Jan 29 16:34:54.565367 containerd[1509]: time="2025-01-29T16:34:54.565330916Z" level=info msg="StartContainer for \"d0ee69af6bb1185ef035d175062639ea2c4694cedff6ce47244b96fbba3f5b2c\""
Jan 29 16:34:54.604077 systemd[1]: Started cri-containerd-d0ee69af6bb1185ef035d175062639ea2c4694cedff6ce47244b96fbba3f5b2c.scope - libcontainer container d0ee69af6bb1185ef035d175062639ea2c4694cedff6ce47244b96fbba3f5b2c.
Jan 29 16:34:54.649415 containerd[1509]: time="2025-01-29T16:34:54.649350078Z" level=info msg="StartContainer for \"d0ee69af6bb1185ef035d175062639ea2c4694cedff6ce47244b96fbba3f5b2c\" returns successfully"
Jan 29 16:34:54.751809 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 29 16:34:54.751982 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 29 16:34:54.852257 kubelet[1921]: E0129 16:34:54.852199 1921 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:34:54.867287 kubelet[1921]: E0129 16:34:54.867222 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:34:55.088992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516-shm.mount: Deactivated successfully.
Jan 29 16:34:55.089143 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a-shm.mount: Deactivated successfully.
Jan 29 16:34:55.089255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3842411644.mount: Deactivated successfully.
Jan 29 16:34:55.204292 kubelet[1921]: I0129 16:34:55.204112 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a"
Jan 29 16:34:55.205807 containerd[1509]: time="2025-01-29T16:34:55.205100759Z" level=info msg="StopPodSandbox for \"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\""
Jan 29 16:34:55.205807 containerd[1509]: time="2025-01-29T16:34:55.205408038Z" level=info msg="Ensure that sandbox 569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a in task-service has been cleanup successfully"
Jan 29 16:34:55.209376 containerd[1509]: time="2025-01-29T16:34:55.207044869Z" level=info msg="TearDown network for sandbox \"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\" successfully"
Jan 29 16:34:55.209376 containerd[1509]: time="2025-01-29T16:34:55.207282609Z" level=info msg="StopPodSandbox for \"569ee70aa5474f3a0bc9a9c7125be48efe83bc6a36b5549bc7db5d0bfcacfa9a\" returns successfully"
Jan 29 16:34:55.210865 containerd[1509]: time="2025-01-29T16:34:55.209676139Z" level=info msg="StopPodSandbox for \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\""
Jan 29 16:34:55.210865 containerd[1509]: time="2025-01-29T16:34:55.210024443Z" level=info msg="TearDown network for sandbox \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\" successfully"
Jan 29 16:34:55.210865 containerd[1509]: time="2025-01-29T16:34:55.210046632Z" level=info msg="StopPodSandbox for \"45a10950ca6148ae2fa59e4a7fd319f5c8db4e347e576d1b5ba9e41cf18bde35\" returns successfully"
Jan 29 16:34:55.211633 containerd[1509]: time="2025-01-29T16:34:55.211602104Z" level=info msg="StopPodSandbox for \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\""
Jan 29 16:34:55.211737 containerd[1509]: time="2025-01-29T16:34:55.211720922Z" level=info msg="TearDown network for sandbox
\"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\" successfully"
Jan 29 16:34:55.211796 containerd[1509]: time="2025-01-29T16:34:55.211739615Z" level=info msg="StopPodSandbox for \"63ce302335998e4e0f59290ef64eb94094618e34084ab9eea7724df7589bde6f\" returns successfully"
Jan 29 16:34:55.212058 systemd[1]: run-netns-cni\x2d23cb198f\x2dfa0f\x2dc1a8\x2d6dbf\x2dbe0c43965269.mount: Deactivated successfully.
Jan 29 16:34:55.213543 containerd[1509]: time="2025-01-29T16:34:55.213506709Z" level=info msg="StopPodSandbox for \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\""
Jan 29 16:34:55.213654 containerd[1509]: time="2025-01-29T16:34:55.213624818Z" level=info msg="TearDown network for sandbox \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\" successfully"
Jan 29 16:34:55.213711 containerd[1509]: time="2025-01-29T16:34:55.213650767Z" level=info msg="StopPodSandbox for \"f822e2f0bba6913e0e7e23e895fa3c9c467dd467fc8d76c0d836dee8ae8d3dbf\" returns successfully"
Jan 29 16:34:55.215430 containerd[1509]: time="2025-01-29T16:34:55.214910397Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\""
Jan 29 16:34:55.215430 containerd[1509]: time="2025-01-29T16:34:55.215022591Z" level=info msg="TearDown network for sandbox \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" successfully"
Jan 29 16:34:55.215430 containerd[1509]: time="2025-01-29T16:34:55.215041559Z" level=info msg="StopPodSandbox for \"445fe3758c84b0b2ba1be03bbdcb85be8efc0c24b7aa4bb454e3b4ca791b2e78\" returns successfully"
Jan 29 16:34:55.215752 containerd[1509]: time="2025-01-29T16:34:55.215692467Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\""
Jan 29 16:34:55.215938 containerd[1509]: time="2025-01-29T16:34:55.215852940Z" level=info msg="TearDown network for sandbox \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" successfully"
Jan 29 16:34:55.215938 containerd[1509]: time="2025-01-29T16:34:55.215875592Z" level=info msg="StopPodSandbox for \"3fd4b0454b3ca7b3a42aed5eefc615c49344d14fe0358ff652508feae7a57547\" returns successfully"
Jan 29 16:34:55.216461 kubelet[1921]: I0129 16:34:55.216430 1921 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516"
Jan 29 16:34:55.217282 containerd[1509]: time="2025-01-29T16:34:55.217039620Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\""
Jan 29 16:34:55.217282 containerd[1509]: time="2025-01-29T16:34:55.217221709Z" level=info msg="TearDown network for sandbox \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" successfully"
Jan 29 16:34:55.217693 containerd[1509]: time="2025-01-29T16:34:55.217290869Z" level=info msg="StopPodSandbox for \"261d1e51c8492a8e0f2b025a0187e6750d96cc8f70602503bb23de0774d94544\" returns successfully"
Jan 29 16:34:55.218359 containerd[1509]: time="2025-01-29T16:34:55.218110004Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\""
Jan 29 16:34:55.218359 containerd[1509]: time="2025-01-29T16:34:55.218196456Z" level=info msg="StopPodSandbox for \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\""
Jan 29 16:34:55.218359 containerd[1509]: time="2025-01-29T16:34:55.218225279Z" level=info msg="TearDown network for sandbox \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" successfully"
Jan 29 16:34:55.218359 containerd[1509]: time="2025-01-29T16:34:55.218243755Z" level=info msg="StopPodSandbox for \"ee712bdfe555614259ac0605d5713cf76391b2fdafe95c44c8d131cdce33ecf2\" returns successfully"
Jan 29 16:34:55.219647 containerd[1509]: time="2025-01-29T16:34:55.219309310Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\""
Jan 29 16:34:55.219647 containerd[1509]: time="2025-01-29T16:34:55.219405177Z" level=info msg="Ensure that sandbox 44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516 in task-service has been cleanup successfully"
Jan 29 16:34:55.219647 containerd[1509]: time="2025-01-29T16:34:55.219452425Z" level=info msg="TearDown network for sandbox \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" successfully"
Jan 29 16:34:55.219647 containerd[1509]: time="2025-01-29T16:34:55.219470369Z" level=info msg="StopPodSandbox for \"ffe19bc1f9340fc35ee28f491362ebccd8f505bb8c0468c4e0fd0e4d1567552a\" returns successfully"
Jan 29 16:34:55.220331 containerd[1509]: time="2025-01-29T16:34:55.220225248Z" level=info msg="TearDown network for sandbox \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\" successfully"
Jan 29 16:34:55.220331 containerd[1509]: time="2025-01-29T16:34:55.220273522Z" level=info msg="StopPodSandbox for \"44e408db046463332be46ef1bfe2595171d1ae8fc94860f0af0fd3e623a68516\" returns successfully"
Jan 29 16:34:55.223092 systemd[1]: run-netns-cni\x2d2a05a7a9\x2d9a58\x2d5394\x2d5bdc\x2dd235ab2b3d4a.mount: Deactivated successfully.
Jan 29 16:34:55.224513 containerd[1509]: time="2025-01-29T16:34:55.224476095Z" level=info msg="StopPodSandbox for \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\""
Jan 29 16:34:55.226015 containerd[1509]: time="2025-01-29T16:34:55.225394671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:9,}"
Jan 29 16:34:55.226015 containerd[1509]: time="2025-01-29T16:34:55.225889757Z" level=info msg="TearDown network for sandbox \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\" successfully"
Jan 29 16:34:55.226290 containerd[1509]: time="2025-01-29T16:34:55.225917431Z" level=info msg="StopPodSandbox for \"e61e8583e64db7f6c1499cab039f9b0e3e28071a92f3caa85acc98009ff5dcfb\" returns successfully"
Jan 29 16:34:55.227932 containerd[1509]: time="2025-01-29T16:34:55.227883116Z" level=info msg="StopPodSandbox for \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\""
Jan 29 16:34:55.229303 containerd[1509]: time="2025-01-29T16:34:55.229253942Z" level=info msg="TearDown network for sandbox \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\" successfully"
Jan 29 16:34:55.229439 containerd[1509]: time="2025-01-29T16:34:55.229299660Z" level=info msg="StopPodSandbox for \"bb8d762d15b8826e7e1cf2e51c1ce8d9711bf7537bb8f3a9cc53f35340ba548f\" returns successfully"
Jan 29 16:34:55.237587 containerd[1509]: time="2025-01-29T16:34:55.236811134Z" level=info msg="StopPodSandbox for \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\""
Jan 29 16:34:55.237587 containerd[1509]: time="2025-01-29T16:34:55.237063619Z" level=info msg="TearDown network for sandbox \"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\" successfully"
Jan 29 16:34:55.237587 containerd[1509]: time="2025-01-29T16:34:55.237101134Z" level=info msg="StopPodSandbox for
\"a8a1e2a86bd58db49dc4380f181bb27b3062ea3e9f577056f698e02036e854d8\" returns successfully"
Jan 29 16:34:55.238178 containerd[1509]: time="2025-01-29T16:34:55.238133273Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\""
Jan 29 16:34:55.238861 containerd[1509]: time="2025-01-29T16:34:55.238394137Z" level=info msg="TearDown network for sandbox \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" successfully"
Jan 29 16:34:55.238861 containerd[1509]: time="2025-01-29T16:34:55.238442968Z" level=info msg="StopPodSandbox for \"c4630c2762598b29ba5e9499a3b79e40eb41d721a96aebe1c1b055249b40a5ac\" returns successfully"
Jan 29 16:34:55.241779 containerd[1509]: time="2025-01-29T16:34:55.241743277Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\""
Jan 29 16:34:55.242075 containerd[1509]: time="2025-01-29T16:34:55.242047168Z" level=info msg="TearDown network for sandbox \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" successfully"
Jan 29 16:34:55.242187 containerd[1509]: time="2025-01-29T16:34:55.242163202Z" level=info msg="StopPodSandbox for \"ee363bde5bacd8ce7e34f38bb714b5938abe00f0306a18b24c1cf00239b031c6\" returns successfully"
Jan 29 16:34:55.243315 containerd[1509]: time="2025-01-29T16:34:55.243284164Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\""
Jan 29 16:34:55.243593 containerd[1509]: time="2025-01-29T16:34:55.243540434Z" level=info msg="TearDown network for sandbox \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" successfully"
Jan 29 16:34:55.243709 containerd[1509]: time="2025-01-29T16:34:55.243676744Z" level=info msg="StopPodSandbox for \"3dd8e533c4887ea36ee944b15ebf478733646780bf9aa109d7bf641339624631\" returns successfully"
Jan 29 16:34:55.245765 containerd[1509]: time="2025-01-29T16:34:55.245052708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:7,}"
Jan 29 16:34:55.247238 kubelet[1921]: I0129 16:34:55.247028 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mj245" podStartSLOduration=2.793840256 podStartE2EDuration="20.247000481s" podCreationTimestamp="2025-01-29 16:34:35 +0000 UTC" firstStartedPulling="2025-01-29 16:34:37.080572698 +0000 UTC m=+3.216991086" lastFinishedPulling="2025-01-29 16:34:54.53373294 +0000 UTC m=+20.670151311" observedRunningTime="2025-01-29 16:34:55.244718336 +0000 UTC m=+21.381136729" watchObservedRunningTime="2025-01-29 16:34:55.247000481 +0000 UTC m=+21.383418874"
Jan 29 16:34:55.278250 systemd[1]: run-containerd-runc-k8s.io-d0ee69af6bb1185ef035d175062639ea2c4694cedff6ce47244b96fbba3f5b2c-runc.eAEA66.mount: Deactivated successfully.
Jan 29 16:34:55.490924 systemd-networkd[1408]: cali44166183540: Link UP
Jan 29 16:34:55.493404 systemd-networkd[1408]: cali44166183540: Gained carrier
Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.372 [INFO][2997] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.388 [INFO][2997] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0 nginx-deployment-8587fbcb89- default 9417d7f3-11ad-4063-8c73-fced7d64fb93 1126 0 2025-01-29 16:34:48 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.128.0.87 nginx-deployment-8587fbcb89-fvdts eth0 default [] [] [kns.default ksa.default.default] cali44166183540 [] []}} ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Namespace="default" Pod="nginx-deployment-8587fbcb89-fvdts" WorkloadEndpoint="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-"
Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.389 [INFO][2997] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Namespace="default" Pod="nginx-deployment-8587fbcb89-fvdts" WorkloadEndpoint="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0"
Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.435 [INFO][3025] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" HandleID="k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Workload="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0"
Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.448 [INFO][3025] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" HandleID="k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Workload="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003187e0), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.87", "pod":"nginx-deployment-8587fbcb89-fvdts", "timestamp":"2025-01-29 16:34:55.435872366 +0000 UTC"}, Hostname:"10.128.0.87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.448 [INFO][3025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.449 [INFO][3025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.449 [INFO][3025] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.87' Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.451 [INFO][3025] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.455 [INFO][3025] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.460 [INFO][3025] ipam/ipam.go 489: Trying affinity for 192.168.17.64/26 host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.462 [INFO][3025] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.64/26 host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.464 [INFO][3025] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.464 [INFO][3025] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.466 [INFO][3025] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389 Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.472 [INFO][3025] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.477 [INFO][3025] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.65/26] block=192.168.17.64/26 
handle="k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.477 [INFO][3025] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.65/26] handle="k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" host="10.128.0.87" Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.477 [INFO][3025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:34:55.501738 containerd[1509]: 2025-01-29 16:34:55.477 [INFO][3025] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.65/26] IPv6=[] ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" HandleID="k8s-pod-network.0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Workload="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0" Jan 29 16:34:55.505974 containerd[1509]: 2025-01-29 16:34:55.480 [INFO][2997] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Namespace="default" Pod="nginx-deployment-8587fbcb89-fvdts" WorkloadEndpoint="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"9417d7f3-11ad-4063-8c73-fced7d64fb93", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 34, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.87", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-fvdts", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.17.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali44166183540", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:34:55.505974 containerd[1509]: 2025-01-29 16:34:55.480 [INFO][2997] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.65/32] ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Namespace="default" Pod="nginx-deployment-8587fbcb89-fvdts" WorkloadEndpoint="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0" Jan 29 16:34:55.505974 containerd[1509]: 2025-01-29 16:34:55.480 [INFO][2997] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44166183540 ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Namespace="default" Pod="nginx-deployment-8587fbcb89-fvdts" WorkloadEndpoint="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0" Jan 29 16:34:55.505974 containerd[1509]: 2025-01-29 16:34:55.491 [INFO][2997] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Namespace="default" Pod="nginx-deployment-8587fbcb89-fvdts" WorkloadEndpoint="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0" Jan 29 16:34:55.505974 containerd[1509]: 2025-01-29 16:34:55.491 [INFO][2997] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Namespace="default" Pod="nginx-deployment-8587fbcb89-fvdts" 
WorkloadEndpoint="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"9417d7f3-11ad-4063-8c73-fced7d64fb93", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 34, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.87", ContainerID:"0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389", Pod:"nginx-deployment-8587fbcb89-fvdts", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.17.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali44166183540", MAC:"e2:5e:b4:c7:5b:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:34:55.505974 containerd[1509]: 2025-01-29 16:34:55.499 [INFO][2997] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389" Namespace="default" Pod="nginx-deployment-8587fbcb89-fvdts" WorkloadEndpoint="10.128.0.87-k8s-nginx--deployment--8587fbcb89--fvdts-eth0" Jan 29 16:34:55.538780 containerd[1509]: time="2025-01-29T16:34:55.538582563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:34:55.538780 containerd[1509]: time="2025-01-29T16:34:55.538678674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:34:55.539198 containerd[1509]: time="2025-01-29T16:34:55.539053426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:34:55.540219 containerd[1509]: time="2025-01-29T16:34:55.540144192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:34:55.569171 systemd[1]: Started cri-containerd-0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389.scope - libcontainer container 0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389. Jan 29 16:34:55.598260 systemd-networkd[1408]: cali1c0bfea65a0: Link UP Jan 29 16:34:55.600019 systemd-networkd[1408]: cali1c0bfea65a0: Gained carrier Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.377 [INFO][2994] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.389 [INFO][2994] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.87-k8s-csi--node--driver--xz4l2-eth0 csi-node-driver- calico-system ec43a3f5-5f5f-4f82-a768-b19afc7730bd 1052 0 2025-01-29 16:34:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.128.0.87 csi-node-driver-xz4l2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1c0bfea65a0 [] []}} 
ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Namespace="calico-system" Pod="csi-node-driver-xz4l2" WorkloadEndpoint="10.128.0.87-k8s-csi--node--driver--xz4l2-" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.389 [INFO][2994] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Namespace="calico-system" Pod="csi-node-driver-xz4l2" WorkloadEndpoint="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.437 [INFO][3026] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" HandleID="k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Workload="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.451 [INFO][3026] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" HandleID="k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Workload="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.128.0.87", "pod":"csi-node-driver-xz4l2", "timestamp":"2025-01-29 16:34:55.437263787 +0000 UTC"}, Hostname:"10.128.0.87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.452 [INFO][3026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.477 [INFO][3026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.477 [INFO][3026] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.87' Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.552 [INFO][3026] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.561 [INFO][3026] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.567 [INFO][3026] ipam/ipam.go 489: Trying affinity for 192.168.17.64/26 host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.570 [INFO][3026] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.64/26 host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.574 [INFO][3026] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.574 [INFO][3026] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.576 [INFO][3026] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.581 [INFO][3026] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.589 [INFO][3026] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.66/26] block=192.168.17.64/26 
handle="k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.589 [INFO][3026] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.66/26] handle="k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" host="10.128.0.87" Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.590 [INFO][3026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:34:55.619194 containerd[1509]: 2025-01-29 16:34:55.590 [INFO][3026] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.66/26] IPv6=[] ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" HandleID="k8s-pod-network.332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Workload="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" Jan 29 16:34:55.620358 containerd[1509]: 2025-01-29 16:34:55.592 [INFO][2994] cni-plugin/k8s.go 386: Populated endpoint ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Namespace="calico-system" Pod="csi-node-driver-xz4l2" WorkloadEndpoint="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.87-k8s-csi--node--driver--xz4l2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec43a3f5-5f5f-4f82-a768-b19afc7730bd", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 34, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.87", ContainerID:"", Pod:"csi-node-driver-xz4l2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c0bfea65a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:34:55.620358 containerd[1509]: 2025-01-29 16:34:55.593 [INFO][2994] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.66/32] ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Namespace="calico-system" Pod="csi-node-driver-xz4l2" WorkloadEndpoint="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" Jan 29 16:34:55.620358 containerd[1509]: 2025-01-29 16:34:55.593 [INFO][2994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c0bfea65a0 ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Namespace="calico-system" Pod="csi-node-driver-xz4l2" WorkloadEndpoint="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" Jan 29 16:34:55.620358 containerd[1509]: 2025-01-29 16:34:55.598 [INFO][2994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Namespace="calico-system" Pod="csi-node-driver-xz4l2" WorkloadEndpoint="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" Jan 29 16:34:55.620358 containerd[1509]: 2025-01-29 16:34:55.600 [INFO][2994] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Namespace="calico-system" 
Pod="csi-node-driver-xz4l2" WorkloadEndpoint="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.87-k8s-csi--node--driver--xz4l2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec43a3f5-5f5f-4f82-a768-b19afc7730bd", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 34, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.87", ContainerID:"332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc", Pod:"csi-node-driver-xz4l2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c0bfea65a0", MAC:"42:dc:fe:7f:8d:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:34:55.620358 containerd[1509]: 2025-01-29 16:34:55.616 [INFO][2994] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc" Namespace="calico-system" Pod="csi-node-driver-xz4l2" WorkloadEndpoint="10.128.0.87-k8s-csi--node--driver--xz4l2-eth0" Jan 29 16:34:55.645232 
containerd[1509]: time="2025-01-29T16:34:55.645144084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fvdts,Uid:9417d7f3-11ad-4063-8c73-fced7d64fb93,Namespace:default,Attempt:7,} returns sandbox id \"0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389\"" Jan 29 16:34:55.648801 containerd[1509]: time="2025-01-29T16:34:55.648687021Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 16:34:55.664238 containerd[1509]: time="2025-01-29T16:34:55.663373199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:34:55.664238 containerd[1509]: time="2025-01-29T16:34:55.663514631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:34:55.664238 containerd[1509]: time="2025-01-29T16:34:55.663543504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:34:55.664238 containerd[1509]: time="2025-01-29T16:34:55.663675426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:34:55.694193 systemd[1]: Started cri-containerd-332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc.scope - libcontainer container 332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc. 
Jan 29 16:34:55.729260 containerd[1509]: time="2025-01-29T16:34:55.729181226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xz4l2,Uid:ec43a3f5-5f5f-4f82-a768-b19afc7730bd,Namespace:calico-system,Attempt:9,} returns sandbox id \"332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc\""
Jan 29 16:34:55.868287 kubelet[1921]: E0129 16:34:55.868105 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:34:56.608877 kernel: bpftool[3282]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 29 16:34:56.687716 systemd-networkd[1408]: cali44166183540: Gained IPv6LL
Jan 29 16:34:56.869891 kubelet[1921]: E0129 16:34:56.869069 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:34:57.117292 systemd-networkd[1408]: vxlan.calico: Link UP
Jan 29 16:34:57.117311 systemd-networkd[1408]: vxlan.calico: Gained carrier
Jan 29 16:34:57.455085 systemd-networkd[1408]: cali1c0bfea65a0: Gained IPv6LL
Jan 29 16:34:57.869530 kubelet[1921]: E0129 16:34:57.869440 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:34:58.845259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488619525.mount: Deactivated successfully.
Jan 29 16:34:58.870525 kubelet[1921]: E0129 16:34:58.870458 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:34:59.120378 systemd-networkd[1408]: vxlan.calico: Gained IPv6LL
Jan 29 16:34:59.872379 kubelet[1921]: E0129 16:34:59.872283 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:00.275198 update_engine[1500]: I20250129 16:35:00.274879 1500 update_attempter.cc:509] Updating boot flags...
Jan 29 16:35:00.367887 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2955)
Jan 29 16:35:00.440629 containerd[1509]: time="2025-01-29T16:35:00.439687417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:35:00.445870 containerd[1509]: time="2025-01-29T16:35:00.443809336Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561"
Jan 29 16:35:00.445870 containerd[1509]: time="2025-01-29T16:35:00.445055271Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:35:00.455572 containerd[1509]: time="2025-01-29T16:35:00.455511643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:35:00.459341 containerd[1509]: time="2025-01-29T16:35:00.459285868Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 4.81051914s"
Jan 29 16:35:00.459341 containerd[1509]: time="2025-01-29T16:35:00.459342900Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 16:35:00.470723 containerd[1509]: time="2025-01-29T16:35:00.470668732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 29 16:35:00.487206 containerd[1509]: time="2025-01-29T16:35:00.487154609Z" level=info msg="CreateContainer within sandbox \"0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 29 16:35:00.519152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3643270252.mount: Deactivated successfully.
Jan 29 16:35:00.522172 containerd[1509]: time="2025-01-29T16:35:00.522120763Z" level=info msg="CreateContainer within sandbox \"0c840ed927f1d1ed269e5c92cf3c495e37598f3a1ac4c8b05dbcecb46dfe0389\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"93e0bc72f50f826d33e1ea1cdc3c3b03efd859a27ed8f105a4944e5f49410aaa\""
Jan 29 16:35:00.529657 containerd[1509]: time="2025-01-29T16:35:00.528918695Z" level=info msg="StartContainer for \"93e0bc72f50f826d33e1ea1cdc3c3b03efd859a27ed8f105a4944e5f49410aaa\""
Jan 29 16:35:00.587803 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (3380)
Jan 29 16:35:00.708169 systemd[1]: Started cri-containerd-93e0bc72f50f826d33e1ea1cdc3c3b03efd859a27ed8f105a4944e5f49410aaa.scope - libcontainer container 93e0bc72f50f826d33e1ea1cdc3c3b03efd859a27ed8f105a4944e5f49410aaa.
Jan 29 16:35:00.776164 containerd[1509]: time="2025-01-29T16:35:00.776110933Z" level=info msg="StartContainer for \"93e0bc72f50f826d33e1ea1cdc3c3b03efd859a27ed8f105a4944e5f49410aaa\" returns successfully"
Jan 29 16:35:00.873319 kubelet[1921]: E0129 16:35:00.873252 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:01.296449 kubelet[1921]: I0129 16:35:01.296139 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-fvdts" podStartSLOduration=8.480173155 podStartE2EDuration="13.296121064s" podCreationTimestamp="2025-01-29 16:34:48 +0000 UTC" firstStartedPulling="2025-01-29 16:34:55.648240622 +0000 UTC m=+21.784659005" lastFinishedPulling="2025-01-29 16:35:00.464188526 +0000 UTC m=+26.600606914" observedRunningTime="2025-01-29 16:35:01.295967788 +0000 UTC m=+27.432386180" watchObservedRunningTime="2025-01-29 16:35:01.296121064 +0000 UTC m=+27.432539473"
Jan 29 16:35:01.332463 ntpd[1479]: Listen normally on 7 vxlan.calico 192.168.17.64:123
Jan 29 16:35:01.333063 ntpd[1479]: 29 Jan 16:35:01 ntpd[1479]: Listen normally on 7 vxlan.calico 192.168.17.64:123
Jan 29 16:35:01.333063 ntpd[1479]: 29 Jan 16:35:01 ntpd[1479]: Listen normally on 8 cali44166183540 [fe80::ecee:eeff:feee:eeee%3]:123
Jan 29 16:35:01.333063 ntpd[1479]: 29 Jan 16:35:01 ntpd[1479]: Listen normally on 9 cali1c0bfea65a0 [fe80::ecee:eeff:feee:eeee%4]:123
Jan 29 16:35:01.333063 ntpd[1479]: 29 Jan 16:35:01 ntpd[1479]: Listen normally on 10 vxlan.calico [fe80::6448:b8ff:fec1:2c9f%5]:123
Jan 29 16:35:01.332618 ntpd[1479]: Listen normally on 8 cali44166183540 [fe80::ecee:eeff:feee:eeee%3]:123
Jan 29 16:35:01.332811 ntpd[1479]: Listen normally on 9 cali1c0bfea65a0 [fe80::ecee:eeff:feee:eeee%4]:123
Jan 29 16:35:01.332902 ntpd[1479]: Listen normally on 10 vxlan.calico [fe80::6448:b8ff:fec1:2c9f%5]:123
Jan 29 16:35:01.613601 containerd[1509]: time="2025-01-29T16:35:01.613533432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:35:01.615126 containerd[1509]: time="2025-01-29T16:35:01.615056439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 29 16:35:01.616852 containerd[1509]: time="2025-01-29T16:35:01.616552238Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:35:01.621752 containerd[1509]: time="2025-01-29T16:35:01.621673801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:35:01.622848 containerd[1509]: time="2025-01-29T16:35:01.622580631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.151849161s"
Jan 29 16:35:01.622848 containerd[1509]: time="2025-01-29T16:35:01.622637470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 29 16:35:01.626006 containerd[1509]: time="2025-01-29T16:35:01.625962300Z" level=info msg="CreateContainer within sandbox \"332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 29 16:35:01.651048 containerd[1509]: time="2025-01-29T16:35:01.650995648Z" level=info msg="CreateContainer within sandbox \"332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"490ccc713bb6e36431173b4885b7b7979a2d9ea7e7b71aebd0ed9c5d687cc03f\""
Jan 29 16:35:01.651990 containerd[1509]: time="2025-01-29T16:35:01.651893116Z" level=info msg="StartContainer for \"490ccc713bb6e36431173b4885b7b7979a2d9ea7e7b71aebd0ed9c5d687cc03f\""
Jan 29 16:35:01.702102 systemd[1]: Started cri-containerd-490ccc713bb6e36431173b4885b7b7979a2d9ea7e7b71aebd0ed9c5d687cc03f.scope - libcontainer container 490ccc713bb6e36431173b4885b7b7979a2d9ea7e7b71aebd0ed9c5d687cc03f.
Jan 29 16:35:01.748104 containerd[1509]: time="2025-01-29T16:35:01.748033480Z" level=info msg="StartContainer for \"490ccc713bb6e36431173b4885b7b7979a2d9ea7e7b71aebd0ed9c5d687cc03f\" returns successfully"
Jan 29 16:35:01.750645 containerd[1509]: time="2025-01-29T16:35:01.750597660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 29 16:35:01.873783 kubelet[1921]: E0129 16:35:01.873604 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:02.875707 kubelet[1921]: E0129 16:35:02.875656 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:02.905874 containerd[1509]: time="2025-01-29T16:35:02.905797400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:35:02.907864 containerd[1509]: time="2025-01-29T16:35:02.907482534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 29 16:35:02.910666 containerd[1509]: time="2025-01-29T16:35:02.909180339Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:35:02.912778 containerd[1509]: time="2025-01-29T16:35:02.912725231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:35:02.913884 containerd[1509]: time="2025-01-29T16:35:02.913818383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.163166359s" Jan 29 16:35:02.914047 containerd[1509]: time="2025-01-29T16:35:02.914021566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 16:35:02.917062 containerd[1509]: time="2025-01-29T16:35:02.917020808Z" level=info msg="CreateContainer within sandbox \"332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 16:35:02.939173 containerd[1509]: time="2025-01-29T16:35:02.939103622Z" level=info msg="CreateContainer within sandbox \"332da3641e62aaec43faf1dc7e1830a2be034d1bce48762d8b83ba38a84c54dc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"19aa6e4c3f75a6026ae7f38472fa3174218b807bfe89e10909e72c239fe0c21b\"" Jan 29 16:35:02.939914 containerd[1509]: time="2025-01-29T16:35:02.939785452Z" level=info msg="StartContainer for \"19aa6e4c3f75a6026ae7f38472fa3174218b807bfe89e10909e72c239fe0c21b\"" Jan 29 16:35:02.984108 systemd[1]: 
run-containerd-runc-k8s.io-19aa6e4c3f75a6026ae7f38472fa3174218b807bfe89e10909e72c239fe0c21b-runc.x4B5Fa.mount: Deactivated successfully. Jan 29 16:35:02.993109 systemd[1]: Started cri-containerd-19aa6e4c3f75a6026ae7f38472fa3174218b807bfe89e10909e72c239fe0c21b.scope - libcontainer container 19aa6e4c3f75a6026ae7f38472fa3174218b807bfe89e10909e72c239fe0c21b. Jan 29 16:35:03.040595 containerd[1509]: time="2025-01-29T16:35:03.040541247Z" level=info msg="StartContainer for \"19aa6e4c3f75a6026ae7f38472fa3174218b807bfe89e10909e72c239fe0c21b\" returns successfully" Jan 29 16:35:03.316024 kubelet[1921]: I0129 16:35:03.315757 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xz4l2" podStartSLOduration=21.131806914 podStartE2EDuration="28.315724073s" podCreationTimestamp="2025-01-29 16:34:35 +0000 UTC" firstStartedPulling="2025-01-29 16:34:55.731305431 +0000 UTC m=+21.867723809" lastFinishedPulling="2025-01-29 16:35:02.915222584 +0000 UTC m=+29.051640968" observedRunningTime="2025-01-29 16:35:03.315245373 +0000 UTC m=+29.451663765" watchObservedRunningTime="2025-01-29 16:35:03.315724073 +0000 UTC m=+29.452142493" Jan 29 16:35:03.876314 kubelet[1921]: E0129 16:35:03.876244 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:04.003282 kubelet[1921]: I0129 16:35:04.003225 1921 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 16:35:04.003282 kubelet[1921]: I0129 16:35:04.003265 1921 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 16:35:04.877511 kubelet[1921]: E0129 16:35:04.877446 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 
16:35:05.878151 kubelet[1921]: E0129 16:35:05.878082 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:06.878607 kubelet[1921]: E0129 16:35:06.878532 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:07.879047 kubelet[1921]: E0129 16:35:07.878979 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:08.879737 kubelet[1921]: E0129 16:35:08.879659 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:09.179138 systemd[1]: Created slice kubepods-besteffort-pod7ada7ad8_599d_468f_80b4_e273bde49cdb.slice - libcontainer container kubepods-besteffort-pod7ada7ad8_599d_468f_80b4_e273bde49cdb.slice. Jan 29 16:35:09.326312 kubelet[1921]: I0129 16:35:09.326235 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7ada7ad8-599d-468f-80b4-e273bde49cdb-data\") pod \"nfs-server-provisioner-0\" (UID: \"7ada7ad8-599d-468f-80b4-e273bde49cdb\") " pod="default/nfs-server-provisioner-0" Jan 29 16:35:09.326312 kubelet[1921]: I0129 16:35:09.326320 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7fzx\" (UniqueName: \"kubernetes.io/projected/7ada7ad8-599d-468f-80b4-e273bde49cdb-kube-api-access-v7fzx\") pod \"nfs-server-provisioner-0\" (UID: \"7ada7ad8-599d-468f-80b4-e273bde49cdb\") " pod="default/nfs-server-provisioner-0" Jan 29 16:35:09.483632 containerd[1509]: time="2025-01-29T16:35:09.483479176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7ada7ad8-599d-468f-80b4-e273bde49cdb,Namespace:default,Attempt:0,}" Jan 29 16:35:09.636923 systemd-networkd[1408]: 
cali60e51b789ff: Link UP Jan 29 16:35:09.638992 systemd-networkd[1408]: cali60e51b789ff: Gained carrier Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.548 [INFO][3580] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.87-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 7ada7ad8-599d-468f-80b4-e273bde49cdb 1250 0 2025-01-29 16:35:09 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.128.0.87 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.87-k8s-nfs--server--provisioner--0-" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.549 [INFO][3580] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.583 [INFO][3590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" HandleID="k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Workload="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.600 [INFO][3590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" HandleID="k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Workload="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319420), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.87", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 16:35:09.583002209 +0000 UTC"}, Hostname:"10.128.0.87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.600 [INFO][3590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.600 [INFO][3590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.601 [INFO][3590] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.87' Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.603 [INFO][3590] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.608 [INFO][3590] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.613 [INFO][3590] ipam/ipam.go 489: Trying affinity for 192.168.17.64/26 host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.615 [INFO][3590] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.64/26 host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.617 [INFO][3590] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.617 [INFO][3590] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.619 [INFO][3590] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193 Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.623 [INFO][3590] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.630 [INFO][3590] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.67/26] block=192.168.17.64/26 
handle="k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.630 [INFO][3590] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.67/26] handle="k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" host="10.128.0.87" Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.630 [INFO][3590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:35:09.652780 containerd[1509]: 2025-01-29 16:35:09.630 [INFO][3590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.67/26] IPv6=[] ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" HandleID="k8s-pod-network.54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Workload="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:35:09.654653 containerd[1509]: 2025-01-29 16:35:09.632 [INFO][3580] cni-plugin/k8s.go 386: Populated endpoint ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.87-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7ada7ad8-599d-468f-80b4-e273bde49cdb", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.87", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.17.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:35:09.654653 containerd[1509]: 2025-01-29 16:35:09.632 [INFO][3580] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.67/32] ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:35:09.654653 containerd[1509]: 2025-01-29 16:35:09.632 [INFO][3580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:35:09.654653 containerd[1509]: 2025-01-29 16:35:09.638 [INFO][3580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:35:09.655221 containerd[1509]: 2025-01-29 16:35:09.640 [INFO][3580] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.87-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7ada7ad8-599d-468f-80b4-e273bde49cdb", ResourceVersion:"1250", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 35, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.87", ContainerID:"54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.17.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"5a:9b:c8:92:9d:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:35:09.655221 containerd[1509]: 2025-01-29 16:35:09.651 [INFO][3580] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.87-k8s-nfs--server--provisioner--0-eth0" Jan 29 16:35:09.690642 containerd[1509]: time="2025-01-29T16:35:09.690211823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:35:09.690642 containerd[1509]: time="2025-01-29T16:35:09.690326636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:35:09.690642 containerd[1509]: time="2025-01-29T16:35:09.690365254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:35:09.690642 containerd[1509]: time="2025-01-29T16:35:09.690496033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:35:09.727146 systemd[1]: Started cri-containerd-54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193.scope - libcontainer container 54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193. Jan 29 16:35:09.782995 containerd[1509]: time="2025-01-29T16:35:09.782796592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7ada7ad8-599d-468f-80b4-e273bde49cdb,Namespace:default,Attempt:0,} returns sandbox id \"54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193\"" Jan 29 16:35:09.785176 containerd[1509]: time="2025-01-29T16:35:09.785138661Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 16:35:09.879899 kubelet[1921]: E0129 16:35:09.879786 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:10.880895 kubelet[1921]: E0129 16:35:10.880778 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:10.895078 systemd-networkd[1408]: cali60e51b789ff: Gained IPv6LL Jan 29 16:35:11.881009 kubelet[1921]: E0129 16:35:11.880930 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 16:35:12.322468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236975996.mount: Deactivated successfully. Jan 29 16:35:12.882006 kubelet[1921]: E0129 16:35:12.881835 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:13.332691 ntpd[1479]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jan 29 16:35:13.882498 kubelet[1921]: E0129 16:35:13.882449 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:14.853092 kubelet[1921]: E0129 16:35:14.853028 1921 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:14.883067 kubelet[1921]: E0129 16:35:14.882945 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 16:35:15.372271 containerd[1509]: time="2025-01-29T16:35:15.372207717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:35:15.374019 containerd[1509]: time="2025-01-29T16:35:15.373949980Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91045236" Jan 29 16:35:15.375951 containerd[1509]: time="2025-01-29T16:35:15.375810705Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:35:15.380796 containerd[1509]: time="2025-01-29T16:35:15.380710753Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:35:15.382576 containerd[1509]: time="2025-01-29T16:35:15.382363022Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.597180734s" Jan 29 16:35:15.382576 containerd[1509]: time="2025-01-29T16:35:15.382451803Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 16:35:15.386312 containerd[1509]: time="2025-01-29T16:35:15.386267998Z" level=info msg="CreateContainer within sandbox \"54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 16:35:15.407846 containerd[1509]: time="2025-01-29T16:35:15.407750278Z" level=info msg="CreateContainer within sandbox \"54e518d677b446f4eea193eb43f41740389b01a1d96adb695f941da18ac53193\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"aaabc351026c8fdee3f663957b00d6d05da6cf07631938b5794d1454d8d59390\"" Jan 29 16:35:15.408675 containerd[1509]: time="2025-01-29T16:35:15.408620502Z" level=info msg="StartContainer for \"aaabc351026c8fdee3f663957b00d6d05da6cf07631938b5794d1454d8d59390\"" Jan 29 16:35:15.454122 systemd[1]: Started cri-containerd-aaabc351026c8fdee3f663957b00d6d05da6cf07631938b5794d1454d8d59390.scope - libcontainer container aaabc351026c8fdee3f663957b00d6d05da6cf07631938b5794d1454d8d59390. 
Jan 29 16:35:15.491396 containerd[1509]: time="2025-01-29T16:35:15.491323028Z" level=info msg="StartContainer for \"aaabc351026c8fdee3f663957b00d6d05da6cf07631938b5794d1454d8d59390\" returns successfully"
Jan 29 16:35:15.883301 kubelet[1921]: E0129 16:35:15.883220 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:16.405585 kubelet[1921]: I0129 16:35:16.405489 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.805925485 podStartE2EDuration="7.405466095s" podCreationTimestamp="2025-01-29 16:35:09 +0000 UTC" firstStartedPulling="2025-01-29 16:35:09.784606404 +0000 UTC m=+35.921024787" lastFinishedPulling="2025-01-29 16:35:15.384147024 +0000 UTC m=+41.520565397" observedRunningTime="2025-01-29 16:35:16.405305605 +0000 UTC m=+42.541723998" watchObservedRunningTime="2025-01-29 16:35:16.405466095 +0000 UTC m=+42.541884505"
Jan 29 16:35:16.884129 kubelet[1921]: E0129 16:35:16.884053 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:17.884917 kubelet[1921]: E0129 16:35:17.884846 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:18.885780 kubelet[1921]: E0129 16:35:18.885675 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:19.886515 kubelet[1921]: E0129 16:35:19.886435 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:20.886685 kubelet[1921]: E0129 16:35:20.886599 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:21.887443 kubelet[1921]: E0129 16:35:21.887385 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:22.887905 kubelet[1921]: E0129 16:35:22.887813 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:23.888117 kubelet[1921]: E0129 16:35:23.888036 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:24.889236 kubelet[1921]: E0129 16:35:24.889165 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:24.967668 systemd[1]: Created slice kubepods-besteffort-podc5b9a9d4_f2dc_4886_b3a8_0e4c1f3ef9e3.slice - libcontainer container kubepods-besteffort-podc5b9a9d4_f2dc_4886_b3a8_0e4c1f3ef9e3.slice.
Jan 29 16:35:25.115782 kubelet[1921]: I0129 16:35:25.115706 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9ppk\" (UniqueName: \"kubernetes.io/projected/c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3-kube-api-access-b9ppk\") pod \"test-pod-1\" (UID: \"c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3\") " pod="default/test-pod-1"
Jan 29 16:35:25.115782 kubelet[1921]: I0129 16:35:25.115782 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5fceb41a-1db2-4827-8c50-5a555dd94df4\" (UniqueName: \"kubernetes.io/nfs/c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3-pvc-5fceb41a-1db2-4827-8c50-5a555dd94df4\") pod \"test-pod-1\" (UID: \"c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3\") " pod="default/test-pod-1"
Jan 29 16:35:25.257935 kernel: FS-Cache: Loaded
Jan 29 16:35:25.342174 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 16:35:25.342363 kernel: RPC: Registered udp transport module.
Jan 29 16:35:25.342403 kernel: RPC: Registered tcp transport module.
Jan 29 16:35:25.347111 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 16:35:25.352606 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 16:35:25.640005 kernel: NFS: Registering the id_resolver key type
Jan 29 16:35:25.640190 kernel: Key type id_resolver registered
Jan 29 16:35:25.640234 kernel: Key type id_legacy registered
Jan 29 16:35:25.694644 nfsidmap[3780]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal'
Jan 29 16:35:25.708669 nfsidmap[3781]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal'
Jan 29 16:35:25.872960 containerd[1509]: time="2025-01-29T16:35:25.872893504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3,Namespace:default,Attempt:0,}"
Jan 29 16:35:25.890802 kubelet[1921]: E0129 16:35:25.890160 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:26.033869 systemd-networkd[1408]: cali5ec59c6bf6e: Link UP
Jan 29 16:35:26.035778 systemd-networkd[1408]: cali5ec59c6bf6e: Gained carrier
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:25.942 [INFO][3782] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.87-k8s-test--pod--1-eth0 default c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3 1313 0 2025-01-29 16:35:09 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.128.0.87 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.87-k8s-test--pod--1-"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:25.942 [INFO][3782] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.87-k8s-test--pod--1-eth0"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:25.979 [INFO][3793] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" HandleID="k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Workload="10.128.0.87-k8s-test--pod--1-eth0"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:25.992 [INFO][3793] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" HandleID="k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Workload="10.128.0.87-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fe170), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.87", "pod":"test-pod-1", "timestamp":"2025-01-29 16:35:25.979451782 +0000 UTC"}, Hostname:"10.128.0.87", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:25.992 [INFO][3793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:25.992 [INFO][3793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:25.993 [INFO][3793] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.87'
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:25.996 [INFO][3793] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.002 [INFO][3793] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.007 [INFO][3793] ipam/ipam.go 489: Trying affinity for 192.168.17.64/26 host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.010 [INFO][3793] ipam/ipam.go 155: Attempting to load block cidr=192.168.17.64/26 host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.012 [INFO][3793] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.013 [INFO][3793] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.014 [INFO][3793] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.019 [INFO][3793] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.027 [INFO][3793] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.17.68/26] block=192.168.17.64/26 handle="k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.027 [INFO][3793] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.17.68/26] handle="k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" host="10.128.0.87"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.027 [INFO][3793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.027 [INFO][3793] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.17.68/26] IPv6=[] ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" HandleID="k8s-pod-network.0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Workload="10.128.0.87-k8s-test--pod--1-eth0"
Jan 29 16:35:26.051938 containerd[1509]: 2025-01-29 16:35:26.029 [INFO][3782] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.87-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.87-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3", ResourceVersion:"1313", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 35, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.87", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.17.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 16:35:26.054147 containerd[1509]: 2025-01-29 16:35:26.029 [INFO][3782] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.17.68/32] ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.87-k8s-test--pod--1-eth0"
Jan 29 16:35:26.054147 containerd[1509]: 2025-01-29 16:35:26.029 [INFO][3782] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.87-k8s-test--pod--1-eth0"
Jan 29 16:35:26.054147 containerd[1509]: 2025-01-29 16:35:26.034 [INFO][3782] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.87-k8s-test--pod--1-eth0"
Jan 29 16:35:26.054147 containerd[1509]: 2025-01-29 16:35:26.035 [INFO][3782] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.87-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.87-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3", ResourceVersion:"1313", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 35, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.87", ContainerID:"0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.17.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"d6:74:2d:4d:d6:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 16:35:26.054147 containerd[1509]: 2025-01-29 16:35:26.046 [INFO][3782] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.87-k8s-test--pod--1-eth0"
Jan 29 16:35:26.089792 containerd[1509]: time="2025-01-29T16:35:26.088766508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:35:26.090116 containerd[1509]: time="2025-01-29T16:35:26.089790849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:35:26.090116 containerd[1509]: time="2025-01-29T16:35:26.089875573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:35:26.090116 containerd[1509]: time="2025-01-29T16:35:26.090046283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:35:26.122163 systemd[1]: Started cri-containerd-0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d.scope - libcontainer container 0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d.
Jan 29 16:35:26.190852 containerd[1509]: time="2025-01-29T16:35:26.190670060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c5b9a9d4-f2dc-4886-b3a8-0e4c1f3ef9e3,Namespace:default,Attempt:0,} returns sandbox id \"0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d\""
Jan 29 16:35:26.194313 containerd[1509]: time="2025-01-29T16:35:26.194060809Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 16:35:26.430115 containerd[1509]: time="2025-01-29T16:35:26.430040626Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:35:26.431512 containerd[1509]: time="2025-01-29T16:35:26.431380246Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 16:35:26.436121 containerd[1509]: time="2025-01-29T16:35:26.436067494Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 241.956477ms"
Jan 29 16:35:26.436121 containerd[1509]: time="2025-01-29T16:35:26.436119938Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\""
Jan 29 16:35:26.439169 containerd[1509]: time="2025-01-29T16:35:26.439113735Z" level=info msg="CreateContainer within sandbox \"0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 16:35:26.463272 containerd[1509]: time="2025-01-29T16:35:26.463196242Z" level=info msg="CreateContainer within sandbox \"0e476d868956c490f24923f2c1eae69da9c6232806797d56cab618724b0f398d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"476640ae9cd97ad702eba3f6ed831800a9ce069d95bffce413e546263b09bd1d\""
Jan 29 16:35:26.464203 containerd[1509]: time="2025-01-29T16:35:26.464161371Z" level=info msg="StartContainer for \"476640ae9cd97ad702eba3f6ed831800a9ce069d95bffce413e546263b09bd1d\""
Jan 29 16:35:26.507668 systemd[1]: run-containerd-runc-k8s.io-476640ae9cd97ad702eba3f6ed831800a9ce069d95bffce413e546263b09bd1d-runc.uwIhmq.mount: Deactivated successfully.
Jan 29 16:35:26.519161 systemd[1]: Started cri-containerd-476640ae9cd97ad702eba3f6ed831800a9ce069d95bffce413e546263b09bd1d.scope - libcontainer container 476640ae9cd97ad702eba3f6ed831800a9ce069d95bffce413e546263b09bd1d.
Jan 29 16:35:26.559404 containerd[1509]: time="2025-01-29T16:35:26.559233780Z" level=info msg="StartContainer for \"476640ae9cd97ad702eba3f6ed831800a9ce069d95bffce413e546263b09bd1d\" returns successfully"
Jan 29 16:35:26.891359 kubelet[1921]: E0129 16:35:26.891277 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:27.446158 kubelet[1921]: I0129 16:35:27.446083 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.202538597 podStartE2EDuration="18.446056648s" podCreationTimestamp="2025-01-29 16:35:09 +0000 UTC" firstStartedPulling="2025-01-29 16:35:26.19353411 +0000 UTC m=+52.329952493" lastFinishedPulling="2025-01-29 16:35:26.437052167 +0000 UTC m=+52.573470544" observedRunningTime="2025-01-29 16:35:27.445712164 +0000 UTC m=+53.582130555" watchObservedRunningTime="2025-01-29 16:35:27.446056648 +0000 UTC m=+53.582475040"
Jan 29 16:35:27.663472 systemd-networkd[1408]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 16:35:27.892019 kubelet[1921]: E0129 16:35:27.891921 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:28.892685 kubelet[1921]: E0129 16:35:28.892607 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:29.893013 kubelet[1921]: E0129 16:35:29.892940 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 16:35:30.332568 ntpd[1479]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 29 16:35:30.333102 ntpd[1479]: 29 Jan 16:35:30 ntpd[1479]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Jan 29 16:35:30.893521 kubelet[1921]: E0129 16:35:30.893444 1921 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"