Jan 13 20:40:25.097586 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025
Jan 13 20:40:25.097659 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:40:25.097702 kernel: BIOS-provided physical RAM map:
Jan 13 20:40:25.097727 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 13 20:40:25.097739 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 13 20:40:25.097752 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 13 20:40:25.097768 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 13 20:40:25.097783 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 13 20:40:25.097802 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd325fff] usable
Jan 13 20:40:25.097816 kernel: BIOS-e820: [mem 0x00000000bd326000-0x00000000bd32dfff] ACPI data
Jan 13 20:40:25.097828 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable
Jan 13 20:40:25.097843 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Jan 13 20:40:25.097882 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 13 20:40:25.097896 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 13 20:40:25.097918 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 13 20:40:25.097933 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 13 20:40:25.097949 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 13 20:40:25.097971 kernel: NX (Execute Disable) protection: active
Jan 13 20:40:25.097993 kernel: APIC: Static calls initialized
Jan 13 20:40:25.098014 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:40:25.098028 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd326018
Jan 13 20:40:25.098043 kernel: random: crng init done
Jan 13 20:40:25.098058 kernel: secureboot: Secure boot disabled
Jan 13 20:40:25.098072 kernel: SMBIOS 2.4 present.
Jan 13 20:40:25.098093 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 13 20:40:25.098109 kernel: Hypervisor detected: KVM
Jan 13 20:40:25.098123 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 20:40:25.098138 kernel: kvm-clock: using sched offset of 13043145796 cycles
Jan 13 20:40:25.098154 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 20:40:25.098170 kernel: tsc: Detected 2299.998 MHz processor
Jan 13 20:40:25.098187 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 20:40:25.098203 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 20:40:25.098219 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 13 20:40:25.098234 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 13 20:40:25.098253 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 20:40:25.098267 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 13 20:40:25.098281 kernel: Using GB pages for direct mapping
Jan 13 20:40:25.098295 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:40:25.098310 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 13 20:40:25.098326 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 13 20:40:25.098348 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 13 20:40:25.098368 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 13 20:40:25.098384 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 13 20:40:25.098400 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 13 20:40:25.098418 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 13 20:40:25.098433 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 13 20:40:25.098449 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 13 20:40:25.098466 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 13 20:40:25.098488 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 13 20:40:25.098505 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 13 20:40:25.098523 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 13 20:40:25.098541 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 13 20:40:25.098558 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 13 20:40:25.098575 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 13 20:40:25.098592 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 13 20:40:25.098609 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 13 20:40:25.098627 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 13 20:40:25.098648 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 13 20:40:25.098665 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 20:40:25.098682 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 20:40:25.098700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 20:40:25.098717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 13 20:40:25.098735 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 13 20:40:25.098752 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 13 20:40:25.098770 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 13 20:40:25.098787 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jan 13 20:40:25.098809 kernel: Zone ranges:
Jan 13 20:40:25.098826 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 20:40:25.098844 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 20:40:25.098920 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 20:40:25.098936 kernel: Movable zone start for each node
Jan 13 20:40:25.098961 kernel: Early memory node ranges
Jan 13 20:40:25.098977 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 13 20:40:25.098991 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 13 20:40:25.099006 kernel: node 0: [mem 0x0000000000100000-0x00000000bd325fff]
Jan 13 20:40:25.099028 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff]
Jan 13 20:40:25.099046 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 13 20:40:25.099062 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 20:40:25.099078 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 13 20:40:25.099094 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 20:40:25.099109 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 13 20:40:25.099123 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 13 20:40:25.099139 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Jan 13 20:40:25.099155 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 20:40:25.099176 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 13 20:40:25.099193 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 20:40:25.099209 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 20:40:25.099226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 20:40:25.099243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 20:40:25.099260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 20:40:25.099276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 20:40:25.099293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 20:40:25.099310 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 20:40:25.099331 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 20:40:25.099348 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 20:40:25.099364 kernel: Booting paravirtualized kernel on KVM
Jan 13 20:40:25.099382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 20:40:25.099399 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 20:40:25.099416 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 20:40:25.099433 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 20:40:25.099448 kernel: pcpu-alloc: [0] 0 1
Jan 13 20:40:25.099464 kernel: kvm-guest: PV spinlocks enabled
Jan 13 20:40:25.099485 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 20:40:25.099503 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:40:25.099521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:40:25.099536 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 13 20:40:25.099554 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:40:25.099569 kernel: Fallback order for Node 0: 0
Jan 13 20:40:25.099586 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Jan 13 20:40:25.099604 kernel: Policy zone: Normal
Jan 13 20:40:25.099626 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:40:25.099642 kernel: software IO TLB: area num 2.
Jan 13 20:40:25.099659 kernel: Memory: 7511308K/7860552K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 348988K reserved, 0K cma-reserved)
Jan 13 20:40:25.099676 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:40:25.099694 kernel: Kernel/User page tables isolation: enabled
Jan 13 20:40:25.099712 kernel: ftrace: allocating 37890 entries in 149 pages
Jan 13 20:40:25.099729 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 20:40:25.099745 kernel: Dynamic Preempt: voluntary
Jan 13 20:40:25.099779 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:40:25.099797 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:40:25.099814 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:40:25.099831 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:40:25.099869 kernel: Rude variant of Tasks RCU enabled.
Jan 13 20:40:25.099887 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:40:25.099946 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:40:25.099972 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:40:25.099988 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 20:40:25.100010 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:40:25.100029 kernel: Console: colour dummy device 80x25
Jan 13 20:40:25.100046 kernel: printk: console [ttyS0] enabled
Jan 13 20:40:25.100065 kernel: ACPI: Core revision 20230628
Jan 13 20:40:25.100082 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 20:40:25.100100 kernel: x2apic enabled
Jan 13 20:40:25.100118 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 20:40:25.100136 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 13 20:40:25.100155 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 20:40:25.100178 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 13 20:40:25.100196 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 13 20:40:25.100214 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 13 20:40:25.100232 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 20:40:25.100248 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 20:40:25.100266 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 20:40:25.100283 kernel: Spectre V2 : Mitigation: IBRS
Jan 13 20:40:25.100301 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 20:40:25.100319 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 20:40:25.100340 kernel: RETBleed: Mitigation: IBRS
Jan 13 20:40:25.100359 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 20:40:25.100376 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 13 20:40:25.100394 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 20:40:25.100412 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 20:40:25.100430 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 20:40:25.100448 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 20:40:25.100466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 20:40:25.100483 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 20:40:25.100505 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 20:40:25.100522 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 20:40:25.100540 kernel: Freeing SMP alternatives memory: 32K
Jan 13 20:40:25.100558 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:40:25.100575 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:40:25.100593 kernel: landlock: Up and running.
Jan 13 20:40:25.100610 kernel: SELinux: Initializing.
Jan 13 20:40:25.100628 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:40:25.100645 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 20:40:25.100666 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 13 20:40:25.100684 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:40:25.100702 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:40:25.100720 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:40:25.100737 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 13 20:40:25.100755 kernel: signal: max sigframe size: 1776
Jan 13 20:40:25.100772 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:40:25.100790 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:40:25.100812 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 20:40:25.100829 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:40:25.100847 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 20:40:25.100879 kernel: .... node #0, CPUs: #1
Jan 13 20:40:25.100898 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 20:40:25.100916 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 20:40:25.100931 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:40:25.100948 kernel: smpboot: Max logical packages: 1
Jan 13 20:40:25.100971 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 13 20:40:25.100992 kernel: devtmpfs: initialized
Jan 13 20:40:25.101008 kernel: x86/mm: Memory block size: 128MB
Jan 13 20:40:25.101024 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 13 20:40:25.101040 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:40:25.101056 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:40:25.101071 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:40:25.101088 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:40:25.101104 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:40:25.101122 kernel: audit: type=2000 audit(1736800824.262:1): state=initialized audit_enabled=0 res=1
Jan 13 20:40:25.101145 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:40:25.101163 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 20:40:25.101181 kernel: cpuidle: using governor menu
Jan 13 20:40:25.101198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:40:25.101215 kernel: dca service started, version 1.12.1
Jan 13 20:40:25.101230 kernel: PCI: Using configuration type 1 for base access
Jan 13 20:40:25.101247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 20:40:25.101263 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:40:25.101281 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:40:25.101303 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:40:25.101321 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:40:25.101341 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:40:25.101357 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:40:25.101375 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:40:25.101393 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:40:25.101411 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 20:40:25.101429 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 20:40:25.101446 kernel: ACPI: Interpreter enabled
Jan 13 20:40:25.101468 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 20:40:25.101485 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 20:40:25.101503 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 20:40:25.101520 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 13 20:40:25.101538 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 20:40:25.101556 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:40:25.101845 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:40:25.103155 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 20:40:25.103338 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 20:40:25.103360 kernel: PCI host bridge to bus 0000:00
Jan 13 20:40:25.103529 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 20:40:25.103687 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 20:40:25.103843 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 20:40:25.105123 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 13 20:40:25.105315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:40:25.105542 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 20:40:25.105754 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 13 20:40:25.106009 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 20:40:25.106209 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 20:40:25.106418 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 13 20:40:25.106623 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 13 20:40:25.106820 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 13 20:40:25.108141 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 20:40:25.108342 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 13 20:40:25.108531 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 13 20:40:25.108724 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:40:25.110797 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 13 20:40:25.111081 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 13 20:40:25.111108 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 20:40:25.111127 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 20:40:25.111146 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 20:40:25.111164 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 20:40:25.111183 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 20:40:25.111201 kernel: iommu: Default domain type: Translated
Jan 13 20:40:25.111220 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 20:40:25.111239 kernel: efivars: Registered efivars operations
Jan 13 20:40:25.111263 kernel: PCI: Using ACPI for IRQ routing
Jan 13 20:40:25.111281 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 20:40:25.111300 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 13 20:40:25.111318 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 13 20:40:25.111336 kernel: e820: reserve RAM buffer [mem 0xbd326000-0xbfffffff]
Jan 13 20:40:25.111354 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 13 20:40:25.111372 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 13 20:40:25.111390 kernel: vgaarb: loaded
Jan 13 20:40:25.111408 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 20:40:25.111431 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:40:25.111450 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:40:25.111468 kernel: pnp: PnP ACPI init
Jan 13 20:40:25.111487 kernel: pnp: PnP ACPI: found 7 devices
Jan 13 20:40:25.111506 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 20:40:25.111525 kernel: NET: Registered PF_INET protocol family
Jan 13 20:40:25.111544 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 20:40:25.111563 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 13 20:40:25.111581 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:40:25.111604 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:40:25.111623 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 13 20:40:25.111641 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 13 20:40:25.111660 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 20:40:25.111679 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 20:40:25.111697 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:40:25.111716 kernel: NET: Registered PF_XDP protocol family
Jan 13 20:40:25.111944 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 20:40:25.112126 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 20:40:25.112301 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 20:40:25.112464 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 13 20:40:25.112649 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 20:40:25.112674 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:40:25.112693 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 20:40:25.112713 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 13 20:40:25.112731 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 20:40:25.112756 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 20:40:25.112775 kernel: clocksource: Switched to clocksource tsc
Jan 13 20:40:25.112793 kernel: Initialise system trusted keyrings
Jan 13 20:40:25.112811 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 13 20:40:25.112830 kernel: Key type asymmetric registered
Jan 13 20:40:25.112848 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:40:25.112881 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 20:40:25.112898 kernel: io scheduler mq-deadline registered
Jan 13 20:40:25.112915 kernel: io scheduler kyber registered
Jan 13 20:40:25.112937 kernel: io scheduler bfq registered
Jan 13 20:40:25.112962 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 20:40:25.112980 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 20:40:25.113175 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 13 20:40:25.113198 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 13 20:40:25.113380 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 13 20:40:25.113404 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 20:40:25.113585 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 13 20:40:25.113615 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:40:25.113632 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 20:40:25.113651 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 13 20:40:25.113668 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 13 20:40:25.113686 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 13 20:40:25.115349 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 13 20:40:25.115385 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 20:40:25.115405 kernel: i8042: Warning: Keylock active
Jan 13 20:40:25.115430 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 20:40:25.115448 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 20:40:25.115652 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 20:40:25.115829 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 20:40:25.117106 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:40:24 UTC (1736800824)
Jan 13 20:40:25.117306 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 20:40:25.117332 kernel: intel_pstate: CPU model not supported
Jan 13 20:40:25.117352 kernel: pstore: Using crash dump compression: deflate
Jan 13 20:40:25.117379 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 20:40:25.117398 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:40:25.117417 kernel: Segment Routing with IPv6
Jan 13 20:40:25.117435 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:40:25.117454 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:40:25.117473 kernel: Key type dns_resolver registered
Jan 13 20:40:25.117493 kernel: IPI shorthand broadcast: enabled
Jan 13 20:40:25.117513 kernel: sched_clock: Marking stable (834004097, 139188186)->(997038673, -23846390)
Jan 13 20:40:25.117533 kernel: registered taskstats version 1
Jan 13 20:40:25.117557 kernel: Loading compiled-in X.509 certificates
Jan 13 20:40:25.117576 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e'
Jan 13 20:40:25.117594 kernel: Key type .fscrypt registered
Jan 13 20:40:25.117613 kernel: Key type fscrypt-provisioning registered
Jan 13 20:40:25.117632 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:40:25.117651 kernel: ima: No architecture policies found
Jan 13 20:40:25.117671 kernel: clk: Disabling unused clocks
Jan 13 20:40:25.117690 kernel: Freeing unused kernel image (initmem) memory: 43320K
Jan 13 20:40:25.117709 kernel: Write protecting the kernel read-only data: 38912k
Jan 13 20:40:25.117732 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Jan 13 20:40:25.117750 kernel: Run /init as init process
Jan 13 20:40:25.117769 kernel: with arguments:
Jan 13 20:40:25.117787 kernel: /init
Jan 13 20:40:25.117806 kernel: with environment:
Jan 13 20:40:25.117824 kernel: HOME=/
Jan 13 20:40:25.117843 kernel: TERM=linux
Jan 13 20:40:25.119902 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:40:25.119927 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 20:40:25.119965 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:40:25.119989 systemd[1]: Detected virtualization google.
Jan 13 20:40:25.120009 systemd[1]: Detected architecture x86-64.
Jan 13 20:40:25.120028 systemd[1]: Running in initrd.
Jan 13 20:40:25.120047 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:40:25.120066 systemd[1]: Hostname set to <localhost>.
Jan 13 20:40:25.120086 systemd[1]: Initializing machine ID from random generator.
Jan 13 20:40:25.120110 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:40:25.120129 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:40:25.120149 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:40:25.120170 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:40:25.120189 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:40:25.120208 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:40:25.120229 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:40:25.120257 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:40:25.120293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:40:25.120318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:40:25.120339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:40:25.120359 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:40:25.120379 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:40:25.120403 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:40:25.120423 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:40:25.120443 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:40:25.120463 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:40:25.120484 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:40:25.120504 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:40:25.120524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:40:25.120545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:40:25.120569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:40:25.120589 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:40:25.120609 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:40:25.120630 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:40:25.120651 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:40:25.120671 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:40:25.120695 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:40:25.120716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:40:25.120736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:40:25.120759 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:40:25.120780 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:40:25.120839 systemd-journald[184]: Collecting audit messages is disabled.
Jan 13 20:40:25.120912 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:40:25.120939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:40:25.120968 systemd-journald[184]: Journal started
Jan 13 20:40:25.121013 systemd-journald[184]: Runtime Journal (/run/log/journal/b6f300229d7c40e0a8576a9a8112954b) is 8.0M, max 148.6M, 140.6M free.
Jan 13 20:40:25.098263 systemd-modules-load[185]: Inserted module 'overlay'
Jan 13 20:40:25.124820 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:40:25.132674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:40:25.141553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:40:25.153077 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:40:25.160665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:40:25.170033 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:40:25.170074 kernel: Bridge firewalling registered
Jan 13 20:40:25.166500 systemd-modules-load[185]: Inserted module 'br_netfilter'
Jan 13 20:40:25.171314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:40:25.172698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:40:25.188812 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:40:25.190466 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:40:25.198735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:40:25.206262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:40:25.215244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:40:25.224074 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:40:25.234069 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:40:25.256778 dracut-cmdline[219]: dracut-dracut-053
Jan 13 20:40:25.261132 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5
Jan 13 20:40:25.284721 systemd-resolved[220]: Positive Trust Anchors:
Jan 13 20:40:25.284741 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:40:25.284817 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:40:25.290430 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 13 20:40:25.292424 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:40:25.312125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:40:25.369900 kernel: SCSI subsystem initialized
Jan 13 20:40:25.381914 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:40:25.393883 kernel: iscsi: registered transport (tcp)
Jan 13 20:40:25.416893 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:40:25.416979 kernel: QLogic iSCSI HBA Driver
Jan 13 20:40:25.470936 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:40:25.476275 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:40:25.515897 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:40:25.515985 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:40:25.516013 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:40:25.562900 kernel: raid6: avx2x4 gen() 17687 MB/s
Jan 13 20:40:25.579893 kernel: raid6: avx2x2 gen() 18143 MB/s
Jan 13 20:40:25.597434 kernel: raid6: avx2x1 gen() 13693 MB/s
Jan 13 20:40:25.597516 kernel: raid6: using algorithm avx2x2 gen() 18143 MB/s
Jan 13 20:40:25.615265 kernel: raid6: .... xor() 18541 MB/s, rmw enabled
Jan 13 20:40:25.615371 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 20:40:25.637897 kernel: xor: automatically using best checksumming function avx
Jan 13 20:40:25.803901 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:40:25.817608 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:40:25.825138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:40:25.858742 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Jan 13 20:40:25.865575 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:40:25.875424 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:40:25.911075 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 13 20:40:25.947883 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:40:25.952080 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:40:26.061439 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:40:26.082103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:40:26.134253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:40:26.145358 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:40:26.176932 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 20:40:26.184088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:40:26.251028 kernel: scsi host0: Virtio SCSI HBA
Jan 13 20:40:26.251327 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 13 20:40:26.184368 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:40:26.283093 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:40:26.305161 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 20:40:26.305198 kernel: AES CTR mode by8 optimization enabled
Jan 13 20:40:26.315565 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:40:26.366029 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 13 20:40:26.412652 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 13 20:40:26.412960 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 13 20:40:26.413189 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 13 20:40:26.413410 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 20:40:26.413638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:40:26.413675 kernel: GPT:17805311 != 25165823
Jan 13 20:40:26.413838 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:40:26.413886 kernel: GPT:17805311 != 25165823
Jan 13 20:40:26.413909 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:40:26.413941 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:40:26.413965 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 13 20:40:26.315743 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:40:26.366127 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:40:26.401055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:40:26.513015 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455)
Jan 13 20:40:26.513057 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (451)
Jan 13 20:40:26.401344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:40:26.422038 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:40:26.436235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:40:26.492756 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:40:26.525916 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 13 20:40:26.555939 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 13 20:40:26.592356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:40:26.611668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 13 20:40:26.639595 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 13 20:40:26.639865 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 13 20:40:26.670191 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:40:26.704658 disk-uuid[544]: Primary Header is updated.
Jan 13 20:40:26.704658 disk-uuid[544]: Secondary Entries is updated.
Jan 13 20:40:26.704658 disk-uuid[544]: Secondary Header is updated.
Jan 13 20:40:26.734246 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:40:26.711149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:40:26.780221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:40:27.757354 disk-uuid[546]: The operation has completed successfully.
Jan 13 20:40:27.766049 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:40:27.834440 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:40:27.834585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:40:27.860070 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:40:27.884303 sh[568]: Success
Jan 13 20:40:27.906882 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 20:40:27.997704 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:40:28.004968 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:40:28.031413 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:40:28.078734 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a
Jan 13 20:40:28.078844 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:40:28.078886 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:40:28.088187 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:40:28.100732 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:40:28.125896 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:40:28.131852 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:40:28.132882 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:40:28.139187 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:40:28.157210 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:40:28.223022 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:40:28.223076 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:40:28.223102 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:40:28.223134 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:40:28.223158 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:40:28.234902 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:40:28.250718 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:40:28.266107 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:40:28.341679 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:40:28.370150 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:40:28.470918 systemd-networkd[752]: lo: Link UP
Jan 13 20:40:28.470933 systemd-networkd[752]: lo: Gained carrier
Jan 13 20:40:28.475118 ignition[676]: Ignition 2.20.0
Jan 13 20:40:28.473784 systemd-networkd[752]: Enumeration completed
Jan 13 20:40:28.475131 ignition[676]: Stage: fetch-offline
Jan 13 20:40:28.474507 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:40:28.475187 ignition[676]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:40:28.474515 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:40:28.475202 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:40:28.474991 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:40:28.475348 ignition[676]: parsed url from cmdline: ""
Jan 13 20:40:28.477381 systemd-networkd[752]: eth0: Link UP
Jan 13 20:40:28.475355 ignition[676]: no config URL provided
Jan 13 20:40:28.477387 systemd-networkd[752]: eth0: Gained carrier
Jan 13 20:40:28.475364 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:40:28.477401 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:40:28.475377 ignition[676]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:40:28.490945 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.39/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 13 20:40:28.475389 ignition[676]: failed to fetch config: resource requires networking
Jan 13 20:40:28.492721 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:40:28.475659 ignition[676]: Ignition finished successfully
Jan 13 20:40:28.504658 systemd[1]: Reached target network.target - Network.
Jan 13 20:40:28.563762 ignition[762]: Ignition 2.20.0
Jan 13 20:40:28.524081 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:40:28.563774 ignition[762]: Stage: fetch
Jan 13 20:40:28.575463 unknown[762]: fetched base config from "system"
Jan 13 20:40:28.564101 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:40:28.575478 unknown[762]: fetched base config from "system"
Jan 13 20:40:28.564119 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:40:28.575487 unknown[762]: fetched user config from "gcp"
Jan 13 20:40:28.564293 ignition[762]: parsed url from cmdline: ""
Jan 13 20:40:28.578025 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:40:28.564301 ignition[762]: no config URL provided
Jan 13 20:40:28.604192 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:40:28.564309 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:40:28.646313 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:40:28.564325 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:40:28.664088 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:40:28.564363 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 13 20:40:28.715508 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:40:28.569899 ignition[762]: GET result: OK
Jan 13 20:40:28.726270 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:40:28.569991 ignition[762]: parsing config with SHA512: 981dd24e43a15a8524177f8fd5f9066e0e9da45ec6f7914f64dec58cbdf9f213b13f84d7908d57c0fd69c00f1d6fc1d80ec91cebe8171f471d652e8781a322d2
Jan 13 20:40:28.747065 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:40:28.575975 ignition[762]: fetch: fetch complete
Jan 13 20:40:28.764042 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:40:28.575985 ignition[762]: fetch: fetch passed
Jan 13 20:40:28.779059 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:40:28.576060 ignition[762]: Ignition finished successfully
Jan 13 20:40:28.793063 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:40:28.644158 ignition[769]: Ignition 2.20.0
Jan 13 20:40:28.817099 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:40:28.644170 ignition[769]: Stage: kargs
Jan 13 20:40:28.644359 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:40:28.644370 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:40:28.645110 ignition[769]: kargs: kargs passed
Jan 13 20:40:28.645164 ignition[769]: Ignition finished successfully
Jan 13 20:40:28.712841 ignition[774]: Ignition 2.20.0
Jan 13 20:40:28.712872 ignition[774]: Stage: disks
Jan 13 20:40:28.713158 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:40:28.713175 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:40:28.714380 ignition[774]: disks: disks passed
Jan 13 20:40:28.714452 ignition[774]: Ignition finished successfully
Jan 13 20:40:28.878217 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:40:29.065062 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:40:29.083015 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:40:29.220970 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none.
Jan 13 20:40:29.221938 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:40:29.222795 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:40:29.255995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:40:29.274565 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:40:29.283448 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:40:29.337055 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (791)
Jan 13 20:40:29.337110 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:40:29.337135 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 20:40:29.337159 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:40:29.283519 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:40:29.378063 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:40:29.378110 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:40:29.283575 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:40:29.362490 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:40:29.387396 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:40:29.411095 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:40:29.530074 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:40:29.541017 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:40:29.552036 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:40:29.561991 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:40:29.645231 systemd-networkd[752]: eth0: Gained IPv6LL
Jan 13 20:40:29.693038 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:40:29.699066 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:40:29.734883 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968
Jan 13 20:40:29.745108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:40:29.755219 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:40:29.794094 ignition[903]: INFO : Ignition 2.20.0
Jan 13 20:40:29.794094 ignition[903]: INFO : Stage: mount
Jan 13 20:40:29.808176 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:40:29.808176 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 20:40:29.808176 ignition[903]: INFO : mount: mount passed
Jan 13 20:40:29.808176 ignition[903]: INFO : Ignition finished successfully
Jan 13 20:40:29.798501 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:40:29.830335 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:40:29.846115 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:40:29.892104 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:40:29.938901 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (915) Jan 13 20:40:29.956621 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:29.956711 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:29.956737 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:40:29.979412 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:40:29.979495 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:40:29.982821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:40:30.022792 ignition[932]: INFO : Ignition 2.20.0 Jan 13 20:40:30.022792 ignition[932]: INFO : Stage: files Jan 13 20:40:30.037006 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:30.037006 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:30.037006 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:40:30.037006 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:40:30.037006 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:40:30.037006 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:40:30.037006 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:40:30.037006 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 20:40:30.032659 unknown[932]: wrote ssh authorized keys file for user: core Jan 13 20:40:30.349972 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 20:40:30.701207 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:40:30.719047 ignition[932]: INFO : files: createResultFile: 
createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:40:30.719047 ignition[932]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:40:30.719047 ignition[932]: INFO : files: files passed Jan 13 20:40:30.719047 ignition[932]: INFO : Ignition finished successfully Jan 13 20:40:30.703536 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:40:30.735131 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:40:30.761183 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:40:30.797444 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:40:30.869052 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:30.869052 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:30.797656 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:40:30.917107 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:30.809955 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:40:30.829193 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:40:30.855088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:40:30.941467 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:40:30.941599 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:40:30.953820 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:40:30.973072 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:40:30.992191 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:40:30.999141 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:40:31.061393 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:40:31.082081 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:40:31.118953 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:40:31.119364 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:40:31.158254 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:40:31.158682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:40:31.158907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:40:31.204107 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:40:31.221196 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:40:31.230222 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:40:31.248288 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:40:31.269230 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:40:31.290355 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
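The files stage above applies the Ignition config fetched earlier from the GCP metadata service: it adds SSH keys for the core user, writes /home/core/install.sh and /etc/flatcar/update.conf, downloads a Kubernetes sysext image from the flatcar/sysext-bakery release, and links it into /etc/extensions. The actual config this instance booted with is not shown in the log; as a rough illustration only, a spec-3.4 Ignition payload producing entries like these could look like the sketch below (emitted from Python; the SSH key and file contents are placeholders):

```python
import json

# Illustrative only -- not the config this instance actually booted with.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core",
             "sshAuthorizedKeys": ["ssh-ed25519 AAAA... (placeholder)"]},
        ]
    },
    "storage": {
        "files": [
            {"path": "/home/core/install.sh",
             "mode": 0o755,  # serialized as decimal 493 in the JSON output
             "contents": {"source": "data:,%23%21%2Fbin%2Fbash%0A"}},
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
             "contents": {"source":
                 "https://github.com/flatcar/sysext-bakery/releases"
                 "/download/latest/kubernetes-v1.31.0-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
        ],
    },
}
print(json.dumps(config, indent=2))
```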
Jan 13 20:40:31.311189 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:40:31.332285 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:40:31.353209 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:40:31.373259 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:40:31.391229 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:40:31.391388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:40:31.416288 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:40:31.436285 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:40:31.457140 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:40:31.457328 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:40:31.479226 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:40:31.479424 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:40:31.510351 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:40:31.510582 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:40:31.519406 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:40:31.519592 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:40:31.591043 ignition[984]: INFO : Ignition 2.20.0 Jan 13 20:40:31.591043 ignition[984]: INFO : Stage: umount Jan 13 20:40:31.591043 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:31.591043 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:31.591043 ignition[984]: INFO : umount: umount passed Jan 13 20:40:31.591043 ignition[984]: INFO : Ignition finished successfully Jan 13 20:40:31.545270 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:40:31.576256 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:40:31.576539 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:40:31.609181 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:40:31.623011 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:40:31.623308 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:40:31.651577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:40:31.651805 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:40:31.693200 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:40:31.694357 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:40:31.694477 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:40:31.709754 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:40:31.709890 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:40:31.732324 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:40:31.732449 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:40:31.751983 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:40:31.752050 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 13 20:40:31.769183 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:40:31.769254 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:40:31.789215 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:40:31.789288 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:40:31.799283 systemd[1]: Stopped target network.target - Network. Jan 13 20:40:31.816202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:40:31.816287 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:40:31.831309 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:40:31.849198 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:40:31.852982 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:40:31.864231 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:40:31.882222 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:40:31.897248 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:40:31.897309 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:40:31.922204 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:40:31.922270 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:40:31.933268 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:40:31.933352 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:40:31.967359 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:40:31.967461 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:40:31.976462 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:40:31.976564 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:40:32.003667 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:40:32.009987 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 13 20:40:32.030411 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:40:32.055842 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:40:32.056082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:40:32.077649 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:40:32.077962 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:40:32.086479 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:40:32.086548 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:40:32.131113 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:40:32.149007 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:40:32.149251 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:40:32.168346 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:40:32.168446 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:40:32.187326 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:40:32.187430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:40:32.205268 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 13 20:40:32.205368 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:40:32.226481 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:40:32.235131 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:40:32.235309 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:40:32.632916 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 13 20:40:32.279453 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:40:32.279541 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:40:32.300251 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:40:32.300324 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:40:32.320262 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:40:32.320368 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:40:32.347361 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:40:32.347479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:40:32.375394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:40:32.375512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:32.430162 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:40:32.451009 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:40:32.451249 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:40:32.472250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:40:32.472349 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:32.495827 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:40:32.496019 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:40:32.513552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:40:32.513703 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:40:32.535837 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:40:32.561294 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:40:32.600171 systemd[1]: Switching root. 
Jan 13 20:40:32.850048 systemd-journald[184]: Journal stopped Jan 13 20:40:25.098735 kernel: ACPI:
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 13 20:40:25.098752 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 13 20:40:25.098770 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 13 20:40:25.098787 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 13 20:40:25.098809 kernel: Zone ranges: Jan 13 20:40:25.098826 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 20:40:25.098844 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 20:40:25.098920 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 13 20:40:25.098936 kernel: Movable zone start for each node Jan 13 20:40:25.098961 kernel: Early memory node ranges Jan 13 20:40:25.098977 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 13 20:40:25.098991 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 13 20:40:25.099006 kernel: node 0: [mem 0x0000000000100000-0x00000000bd325fff] Jan 13 20:40:25.099028 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff] Jan 13 20:40:25.099046 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 13 20:40:25.099062 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 13 20:40:25.099078 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 13 20:40:25.099094 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 20:40:25.099109 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 13 20:40:25.099123 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 13 20:40:25.099139 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Jan 13 20:40:25.099155 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 13 20:40:25.099176 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 13 20:40:25.099193 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 13 20:40:25.099209 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 20:40:25.099226 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 13 20:40:25.099243 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 20:40:25.099260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 20:40:25.099276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 20:40:25.099293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 20:40:25.099310 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 20:40:25.099331 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 20:40:25.099348 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 13 20:40:25.099364 kernel: Booting paravirtualized kernel on KVM Jan 13 20:40:25.099382 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 20:40:25.099399 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 20:40:25.099416 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 20:40:25.099433 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 20:40:25.099448 kernel: pcpu-alloc: [0] 0 1 Jan 13 20:40:25.099464 kernel: kvm-guest: PV spinlocks enabled Jan 13 20:40:25.099485 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 20:40:25.099503 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:40:25.099521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:40:25.099536 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 13 20:40:25.099554 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:40:25.099569 kernel: Fallback order for Node 0: 0 Jan 13 20:40:25.099586 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Jan 13 20:40:25.099604 kernel: Policy zone: Normal Jan 13 20:40:25.099626 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:40:25.099642 kernel: software IO TLB: area num 2. Jan 13 20:40:25.099659 kernel: Memory: 7511308K/7860552K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 348988K reserved, 0K cma-reserved) Jan 13 20:40:25.099676 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:40:25.099694 kernel: Kernel/User page tables isolation: enabled Jan 13 20:40:25.099712 kernel: ftrace: allocating 37890 entries in 149 pages Jan 13 20:40:25.099729 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 20:40:25.099745 kernel: Dynamic Preempt: voluntary Jan 13 20:40:25.099779 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:40:25.099797 kernel: rcu: RCU event tracing is enabled. Jan 13 20:40:25.099814 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:40:25.099831 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:40:25.099869 kernel: Rude variant of Tasks RCU enabled. Jan 13 20:40:25.099887 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:40:25.099946 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 20:40:25.099972 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:40:25.099988 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 20:40:25.100010 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:40:25.100029 kernel: Console: colour dummy device 80x25 Jan 13 20:40:25.100046 kernel: printk: console [ttyS0] enabled Jan 13 20:40:25.100065 kernel: ACPI: Core revision 20230628 Jan 13 20:40:25.100082 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 20:40:25.100100 kernel: x2apic enabled Jan 13 20:40:25.100118 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 20:40:25.100136 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 13 20:40:25.100155 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 20:40:25.100178 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 13 20:40:25.100196 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 13 20:40:25.100214 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 13 20:40:25.100232 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 20:40:25.100248 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 20:40:25.100266 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 20:40:25.100283 kernel: Spectre V2 : Mitigation: IBRS Jan 13 20:40:25.100301 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 20:40:25.100319 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 20:40:25.100340 kernel: RETBleed: Mitigation: IBRS Jan 13 20:40:25.100359 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 20:40:25.100376 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 13 20:40:25.100394 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 20:40:25.100412 kernel: MDS: Mitigation: Clear CPU buffers Jan 13 20:40:25.100430 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 13 20:40:25.100448 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 20:40:25.100466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 20:40:25.100483 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 20:40:25.100505 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 20:40:25.100522 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 13 20:40:25.100540 kernel: Freeing SMP alternatives memory: 32K Jan 13 20:40:25.100558 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:40:25.100575 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:40:25.100593 kernel: landlock: Up and running. Jan 13 20:40:25.100610 kernel: SELinux: Initializing. Jan 13 20:40:25.100628 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 20:40:25.100645 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 20:40:25.100666 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 13 20:40:25.100684 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:40:25.100702 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:40:25.100720 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:40:25.100737 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 13 20:40:25.100755 kernel: signal: max sigframe size: 1776 Jan 13 20:40:25.100772 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:40:25.100790 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:40:25.100812 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 20:40:25.100829 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:40:25.100847 kernel: smpboot: x86: Booting SMP configuration: Jan 13 20:40:25.100879 kernel: .... node #0, CPUs: #1 Jan 13 20:40:25.100898 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 13 20:40:25.100916 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 13 20:40:25.100931 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:40:25.100948 kernel: smpboot: Max logical packages: 1 Jan 13 20:40:25.100971 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 13 20:40:25.100992 kernel: devtmpfs: initialized Jan 13 20:40:25.101008 kernel: x86/mm: Memory block size: 128MB Jan 13 20:40:25.101024 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 13 20:40:25.101040 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:40:25.101056 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:40:25.101071 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:40:25.101088 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:40:25.101104 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:40:25.101122 kernel: audit: type=2000 audit(1736800824.262:1): state=initialized audit_enabled=0 res=1 Jan 13 20:40:25.101145 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:40:25.101163 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 20:40:25.101181 kernel: cpuidle: using governor menu Jan 13 20:40:25.101198 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:40:25.101215 kernel: dca service started, version 1.12.1 Jan 13 20:40:25.101230 kernel: PCI: Using configuration type 1 for base access Jan 13 20:40:25.101247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 13 20:40:25.101263 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:40:25.101281 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:40:25.101303 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:40:25.101321 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:40:25.101341 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:40:25.101357 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:40:25.101375 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:40:25.101393 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:40:25.101411 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 13 20:40:25.101429 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 20:40:25.101446 kernel: ACPI: Interpreter enabled Jan 13 20:40:25.101468 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 20:40:25.101485 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 20:40:25.101503 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 20:40:25.101520 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 13 20:40:25.101538 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 13 20:40:25.101556 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:40:25.101845 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:40:25.103155 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 20:40:25.103338 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 20:40:25.103360 kernel: PCI host bridge to bus 0000:00 Jan 13 20:40:25.103529 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 13 20:40:25.103687 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 20:40:25.103843 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 20:40:25.105123 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 13 20:40:25.105315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:40:25.105542 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 20:40:25.105754 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 13 20:40:25.106009 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 20:40:25.106209 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 13 20:40:25.106418 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 13 20:40:25.106623 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 13 20:40:25.106820 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 13 20:40:25.108141 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 20:40:25.108342 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 13 20:40:25.108531 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 13 20:40:25.108724 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 20:40:25.110797 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 13 20:40:25.111081 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 13 20:40:25.111108 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 20:40:25.111127 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 20:40:25.111146 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 20:40:25.111164 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 20:40:25.111183 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 20:40:25.111201 kernel: iommu: Default domain type: Translated Jan 13 20:40:25.111220 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 13 20:40:25.111239 kernel: efivars: Registered efivars operations Jan 13 20:40:25.111263 kernel: PCI: Using ACPI for IRQ routing Jan 13 20:40:25.111281 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 20:40:25.111300 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 13 20:40:25.111318 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 13 20:40:25.111336 kernel: e820: reserve RAM buffer [mem 0xbd326000-0xbfffffff] Jan 13 20:40:25.111354 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 13 20:40:25.111372 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 13 20:40:25.111390 kernel: vgaarb: loaded Jan 13 20:40:25.111408 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 20:40:25.111431 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:40:25.111450 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:40:25.111468 kernel: pnp: PnP ACPI init Jan 13 20:40:25.111487 kernel: pnp: PnP ACPI: found 7 devices Jan 13 20:40:25.111506 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 20:40:25.111525 kernel: NET: Registered PF_INET protocol family Jan 13 20:40:25.111544 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 20:40:25.111563 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 13 20:40:25.111581 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:40:25.111604 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:40:25.111623 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 20:40:25.111641 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 13 20:40:25.111660 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 20:40:25.111679 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 20:40:25.111697 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:40:25.111716 kernel: NET: Registered PF_XDP protocol family Jan 13 20:40:25.111944 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 20:40:25.112126 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 20:40:25.112301 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 20:40:25.112464 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 13 20:40:25.112649 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 20:40:25.112674 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:40:25.112693 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 20:40:25.112713 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 13 20:40:25.112731 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 20:40:25.112756 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 20:40:25.112775 kernel: clocksource: Switched to clocksource tsc Jan 
13 20:40:25.112793 kernel: Initialise system trusted keyrings Jan 13 20:40:25.112811 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 13 20:40:25.112830 kernel: Key type asymmetric registered Jan 13 20:40:25.112848 kernel: Asymmetric key parser 'x509' registered Jan 13 20:40:25.112881 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 20:40:25.112898 kernel: io scheduler mq-deadline registered Jan 13 20:40:25.112915 kernel: io scheduler kyber registered Jan 13 20:40:25.112937 kernel: io scheduler bfq registered Jan 13 20:40:25.112962 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 20:40:25.112980 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 20:40:25.113175 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 13 20:40:25.113198 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 13 20:40:25.113380 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 13 20:40:25.113404 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 20:40:25.113585 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 13 20:40:25.113615 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:40:25.113632 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 20:40:25.113651 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 20:40:25.113668 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 13 20:40:25.113686 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 13 20:40:25.115349 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 13 20:40:25.115385 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 20:40:25.115405 kernel: i8042: Warning: Keylock active Jan 13 20:40:25.115430 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 20:40:25.115448 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 20:40:25.115652 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 13 20:40:25.115829 kernel: rtc_cmos 00:00: registered as rtc0 Jan 13 20:40:25.117106 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T20:40:24 UTC (1736800824) Jan 13 20:40:25.117306 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 13 20:40:25.117332 kernel: intel_pstate: CPU model not supported Jan 13 20:40:25.117352 kernel: pstore: Using crash dump compression: deflate Jan 13 20:40:25.117379 kernel: pstore: Registered efi_pstore as persistent store backend Jan 13 20:40:25.117398 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:40:25.117417 kernel: Segment Routing with IPv6 Jan 13 20:40:25.117435 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:40:25.117454 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:40:25.117473 kernel: Key type dns_resolver registered Jan 13 20:40:25.117493 kernel: IPI shorthand broadcast: enabled Jan 13 20:40:25.117513 kernel: sched_clock: Marking stable (834004097, 139188186)->(997038673, -23846390) Jan 13 20:40:25.117533 kernel: registered taskstats version 1 Jan 13 20:40:25.117557 kernel: Loading compiled-in X.509 certificates Jan 13 20:40:25.117576 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 13 20:40:25.117594 kernel: Key type .fscrypt registered Jan 13 20:40:25.117613 kernel: Key type fscrypt-provisioning registered Jan 13 20:40:25.117632 kernel: ima: Allocated hash algorithm: 
sha1 Jan 13 20:40:25.117651 kernel: ima: No architecture policies found Jan 13 20:40:25.117671 kernel: clk: Disabling unused clocks Jan 13 20:40:25.117690 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 13 20:40:25.117709 kernel: Write protecting the kernel read-only data: 38912k Jan 13 20:40:25.117732 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 13 20:40:25.117750 kernel: Run /init as init process Jan 13 20:40:25.117769 kernel: with arguments: Jan 13 20:40:25.117787 kernel: /init Jan 13 20:40:25.117806 kernel: with environment: Jan 13 20:40:25.117824 kernel: HOME=/ Jan 13 20:40:25.117843 kernel: TERM=linux Jan 13 20:40:25.119902 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:40:25.119927 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 20:40:25.119965 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:40:25.119989 systemd[1]: Detected virtualization google. Jan 13 20:40:25.120009 systemd[1]: Detected architecture x86-64. Jan 13 20:40:25.120028 systemd[1]: Running in initrd. Jan 13 20:40:25.120047 systemd[1]: No hostname configured, using default hostname. Jan 13 20:40:25.120066 systemd[1]: Hostname set to <localhost>. Jan 13 20:40:25.120086 systemd[1]: Initializing machine ID from random generator. Jan 13 20:40:25.120110 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:40:25.120129 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:40:25.120149 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:40:25.120170 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:40:25.120189 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:40:25.120208 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:40:25.120229 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:40:25.120257 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:40:25.120293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:40:25.120318 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:40:25.120339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:40:25.120359 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:40:25.120379 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:40:25.120403 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:40:25.120423 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:40:25.120443 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:40:25.120463 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:40:25.120484 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:40:25.120504 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:40:25.120524 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:40:25.120545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:40:25.120569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:40:25.120589 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:40:25.120609 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:40:25.120630 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:40:25.120651 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:40:25.120671 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:40:25.120695 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:40:25.120716 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:40:25.120736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:25.120759 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:40:25.120780 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:40:25.120839 systemd-journald[184]: Collecting audit messages is disabled. Jan 13 20:40:25.120912 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:40:25.120939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:40:25.120968 systemd-journald[184]: Journal started Jan 13 20:40:25.121013 systemd-journald[184]: Runtime Journal (/run/log/journal/b6f300229d7c40e0a8576a9a8112954b) is 8.0M, max 148.6M, 140.6M free. Jan 13 20:40:25.098263 systemd-modules-load[185]: Inserted module 'overlay' Jan 13 20:40:25.124820 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:40:25.132674 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:40:25.141553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:25.153077 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:40:25.160665 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:40:25.170033 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:40:25.170074 kernel: Bridge firewalling registered Jan 13 20:40:25.166500 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 13 20:40:25.171314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:40:25.172698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:40:25.188812 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:40:25.190466 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:40:25.198735 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:40:25.206262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:25.215244 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:40:25.224074 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:40:25.234069 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:40:25.256778 dracut-cmdline[219]: dracut-dracut-053 Jan 13 20:40:25.261132 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 13 20:40:25.284721 systemd-resolved[220]: Positive Trust Anchors: Jan 13 20:40:25.284741 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:40:25.284817 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:40:25.290430 systemd-resolved[220]: Defaulting to hostname 'linux'. Jan 13 20:40:25.292424 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:40:25.312125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:40:25.369900 kernel: SCSI subsystem initialized Jan 13 20:40:25.381914 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:40:25.393883 kernel: iscsi: registered transport (tcp) Jan 13 20:40:25.416893 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:40:25.416979 kernel: QLogic iSCSI HBA Driver Jan 13 20:40:25.470936 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:40:25.476275 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:40:25.515897 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:40:25.515985 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:40:25.516013 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:40:25.562900 kernel: raid6: avx2x4 gen() 17687 MB/s Jan 13 20:40:25.579893 kernel: raid6: avx2x2 gen() 18143 MB/s Jan 13 20:40:25.597434 kernel: raid6: avx2x1 gen() 13693 MB/s Jan 13 20:40:25.597516 kernel: raid6: using algorithm avx2x2 gen() 18143 MB/s Jan 13 20:40:25.615265 kernel: raid6: .... xor() 18541 MB/s, rmw enabled Jan 13 20:40:25.615371 kernel: raid6: using avx2x2 recovery algorithm Jan 13 20:40:25.637897 kernel: xor: automatically using best checksumming function avx Jan 13 20:40:25.803901 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:40:25.817608 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:40:25.825138 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:40:25.858742 systemd-udevd[403]: Using default interface naming scheme 'v255'. 
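For reference, the dracut-cmdline entry above shows the effective kernel command line; rootflags=rw and mount.usrflags=ro appear twice because the initrd prepends its own defaults to the bootloader-supplied line. A tiny illustrative parser (not part of Flatcar) that splits such a line into parameters, with later duplicates simply overwriting earlier ones:

```python
# Illustrative sketch: split a kernel command line into a dict.
# Bare flags map to True; repeated keys keep the last occurrence.
def parse_cmdline(raw: str) -> dict:
    params = {}
    for token in raw.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

with open("/proc/cmdline") as f:
    params = parse_cmdline(f.read())
print(params.get("flatcar.oem.id"))   # "gce" on this instance
print(params.get("verity.usrhash"))   # root hash for the verity-protected /usr
```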
Jan 13 20:40:25.865575 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:40:25.875424 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:40:25.911075 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 13 20:40:25.947883 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:40:25.952080 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:40:26.061439 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:40:26.082103 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:40:26.134253 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:40:26.145358 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:40:26.176932 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 20:40:26.184088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:40:26.251028 kernel: scsi host0: Virtio SCSI HBA Jan 13 20:40:26.251327 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 13 20:40:26.184368 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:40:26.283093 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:40:26.305161 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 20:40:26.305198 kernel: AES CTR mode by8 optimization enabled Jan 13 20:40:26.315565 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:40:26.366029 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 13 20:40:26.412652 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 13 20:40:26.412960 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 13 20:40:26.413189 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 13 20:40:26.413410 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:40:26.413638 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:40:26.413675 kernel: GPT:17805311 != 25165823 Jan 13 20:40:26.413838 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:40:26.413886 kernel: GPT:17805311 != 25165823 Jan 13 20:40:26.413909 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:40:26.413941 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:40:26.413965 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 13 20:40:26.315743 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:26.366127 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:40:26.401055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:40:26.513015 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455) Jan 13 20:40:26.513057 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/sda3 scanned by (udev-worker) (451) Jan 13 20:40:26.401344 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:26.422038 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:26.436235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 20:40:26.492756 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:40:26.525916 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 13 20:40:26.555939 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 13 20:40:26.592356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:26.611668 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 20:40:26.639595 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 13 20:40:26.639865 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 13 20:40:26.670191 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:40:26.704658 disk-uuid[544]: Primary Header is updated. Jan 13 20:40:26.704658 disk-uuid[544]: Secondary Entries is updated. Jan 13 20:40:26.704658 disk-uuid[544]: Secondary Header is updated. Jan 13 20:40:26.734246 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:40:26.711149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:40:26.780221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:27.757354 disk-uuid[546]: The operation has completed successfully. Jan 13 20:40:27.766049 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:40:27.834440 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:40:27.834585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:40:27.860070 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:40:27.884303 sh[568]: Success Jan 13 20:40:27.906882 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 20:40:27.997704 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:40:28.004968 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:40:28.031413 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:40:28.078734 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 13 20:40:28.078844 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:28.078886 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:40:28.088187 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:40:28.100732 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:40:28.125896 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:40:28.131852 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:40:28.132882 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:40:28.139187 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:40:28.157210 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
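The GPT complaints above are what a grown boot disk looks like on first boot: the primary header still claims the backup header sits at LBA 17805311, while the resized 12 GiB disk actually ends at LBA 25165823, hence the kernel's "GPT:17805311 != 25165823" lines, until disk-uuid.service rewrites the primary and secondary headers. A minimal sketch, assuming 512-byte sectors and read access to the device (named here to match this log), of how one could observe that mismatch directly:

```python
import os
import struct

DEV = "/dev/sda"  # the boot disk in this log; adjust for other systems

with open(DEV, "rb") as disk:
    disk.seek(512)                      # the primary GPT header lives at LBA 1
    header = disk.read(92)
    assert header[:8] == b"EFI PART", "no GPT signature found"
    # offset 32 holds the 64-bit LBA where the header claims the backup lives
    backup_lba = struct.unpack_from("<Q", header, 32)[0]
    disk.seek(0, os.SEEK_END)
    last_lba = disk.tell() // 512 - 1   # the backup belongs on the last sector

if backup_lba != last_lba:
    print(f"backup header at LBA {backup_lba}, disk ends at LBA {last_lba}")
```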
Jan 13 20:40:28.223022 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:28.223076 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:28.223102 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:40:28.223134 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:40:28.223158 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:40:28.234902 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:28.250718 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:40:28.266107 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:40:28.341679 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:40:28.370150 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:40:28.470918 systemd-networkd[752]: lo: Link UP Jan 13 20:40:28.470933 systemd-networkd[752]: lo: Gained carrier Jan 13 20:40:28.475118 ignition[676]: Ignition 2.20.0 Jan 13 20:40:28.473784 systemd-networkd[752]: Enumeration completed Jan 13 20:40:28.475131 ignition[676]: Stage: fetch-offline Jan 13 20:40:28.474507 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:40:28.475187 ignition[676]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:28.474515 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:40:28.475202 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:28.474991 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:40:28.475348 ignition[676]: parsed url from cmdline: "" Jan 13 20:40:28.477381 systemd-networkd[752]: eth0: Link UP Jan 13 20:40:28.475355 ignition[676]: no config URL provided Jan 13 20:40:28.477387 systemd-networkd[752]: eth0: Gained carrier Jan 13 20:40:28.475364 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:40:28.477401 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:40:28.475377 ignition[676]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:40:28.490945 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.39/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 20:40:28.475389 ignition[676]: failed to fetch config: resource requires networking Jan 13 20:40:28.492721 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:40:28.475659 ignition[676]: Ignition finished successfully Jan 13 20:40:28.504658 systemd[1]: Reached target network.target - Network. Jan 13 20:40:28.563762 ignition[762]: Ignition 2.20.0 Jan 13 20:40:28.524081 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
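fetch-offline gives up with "failed to fetch config: resource requires networking" because on GCE the user's Ignition config lives on the metadata server, which is only reachable once eth0 has its DHCP address; hence the ordering above, where systemd-networkd comes up before the fetch stage starts. The fetch itself reduces to an HTTP GET carrying the mandatory Metadata-Flavor header, and the SHA512 logged in the next stage is simply a digest of the fetched bytes. A stdlib-only sketch, without Ignition's retry logic:

```python
import hashlib
import urllib.request

URL = ("http://169.254.169.254/computeMetadata/v1/"
       "instance/attributes/user-data")

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=10) as resp:
    config = resp.read()

print(hashlib.sha512(config).hexdigest())  # the digest Ignition logs below
```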
Jan 13 20:40:28.563774 ignition[762]: Stage: fetch Jan 13 20:40:28.575463 unknown[762]: fetched base config from "system" Jan 13 20:40:28.564101 ignition[762]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:28.575478 unknown[762]: fetched base config from "system" Jan 13 20:40:28.564119 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:28.575487 unknown[762]: fetched user config from "gcp" Jan 13 20:40:28.564293 ignition[762]: parsed url from cmdline: "" Jan 13 20:40:28.578025 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:40:28.564301 ignition[762]: no config URL provided Jan 13 20:40:28.604192 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:40:28.564309 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:40:28.646313 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:40:28.564325 ignition[762]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:40:28.664088 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:40:28.564363 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 13 20:40:28.715508 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:40:28.569899 ignition[762]: GET result: OK Jan 13 20:40:28.726270 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:40:28.569991 ignition[762]: parsing config with SHA512: 981dd24e43a15a8524177f8fd5f9066e0e9da45ec6f7914f64dec58cbdf9f213b13f84d7908d57c0fd69c00f1d6fc1d80ec91cebe8171f471d652e8781a322d2 Jan 13 20:40:28.747065 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:40:28.575975 ignition[762]: fetch: fetch complete Jan 13 20:40:28.764042 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:40:28.575985 ignition[762]: fetch: fetch passed Jan 13 20:40:28.779059 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:40:28.576060 ignition[762]: Ignition finished successfully Jan 13 20:40:28.793063 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:40:28.644158 ignition[769]: Ignition 2.20.0 Jan 13 20:40:28.817099 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:40:28.644170 ignition[769]: Stage: kargs Jan 13 20:40:28.644359 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:28.644370 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:28.645110 ignition[769]: kargs: kargs passed Jan 13 20:40:28.645164 ignition[769]: Ignition finished successfully Jan 13 20:40:28.712841 ignition[774]: Ignition 2.20.0 Jan 13 20:40:28.712872 ignition[774]: Stage: disks Jan 13 20:40:28.713158 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:28.713175 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:28.714380 ignition[774]: disks: disks passed Jan 13 20:40:28.714452 ignition[774]: Ignition finished successfully Jan 13 20:40:28.878217 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:40:29.065062 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:40:29.083015 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:40:29.220970 kernel: EXT4-fs (sda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. 
Quota mode: none. Jan 13 20:40:29.221938 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:40:29.222795 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:40:29.255995 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:40:29.274565 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:40:29.283448 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:40:29.337055 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (791) Jan 13 20:40:29.337110 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:29.337135 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:29.337159 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:40:29.283519 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:40:29.378063 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:40:29.378110 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:40:29.283575 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:40:29.362490 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:40:29.387396 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:40:29.411095 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:40:29.530074 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:40:29.541017 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:40:29.552036 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:40:29.561991 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:40:29.645231 systemd-networkd[752]: eth0: Gained IPv6LL Jan 13 20:40:29.693038 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:40:29.699066 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:40:29.734883 kernel: BTRFS info (device sda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:29.745108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:40:29.755219 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:40:29.794094 ignition[903]: INFO : Ignition 2.20.0 Jan 13 20:40:29.794094 ignition[903]: INFO : Stage: mount Jan 13 20:40:29.808176 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:29.808176 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:29.808176 ignition[903]: INFO : mount: mount passed Jan 13 20:40:29.808176 ignition[903]: INFO : Ignition finished successfully Jan 13 20:40:29.798501 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:40:29.830335 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:40:29.846115 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:40:29.892104 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 20:40:29.938901 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (915) Jan 13 20:40:29.956621 kernel: BTRFS info (device sda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 13 20:40:29.956711 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 20:40:29.956737 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:40:29.979412 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:40:29.979495 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:40:29.982821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:40:30.022792 ignition[932]: INFO : Ignition 2.20.0 Jan 13 20:40:30.022792 ignition[932]: INFO : Stage: files Jan 13 20:40:30.037006 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:30.037006 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:30.037006 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:40:30.037006 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:40:30.037006 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:40:30.037006 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:40:30.037006 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:40:30.037006 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:40:30.037006 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 13 20:40:30.032659 unknown[932]: wrote ssh authorized keys file for user: core Jan 13 20:40:30.349972 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 20:40:30.701207 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 13 20:40:30.719047 ignition[932]: INFO : files: createResultFile: 
createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:40:30.719047 ignition[932]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:40:30.719047 ignition[932]: INFO : files: files passed Jan 13 20:40:30.719047 ignition[932]: INFO : Ignition finished successfully Jan 13 20:40:30.703536 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:40:30.735131 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:40:30.761183 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:40:30.797444 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:40:30.869052 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:30.869052 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:30.797656 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:40:30.917107 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:40:30.809955 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:40:30.829193 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:40:30.855088 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:40:30.941467 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:40:30.941599 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:40:30.953820 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:40:30.973072 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:40:30.992191 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:40:30.999141 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:40:31.061393 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:40:31.082081 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:40:31.118953 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:40:31.119364 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:40:31.158254 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:40:31.158682 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:40:31.158907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:40:31.204107 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:40:31.221196 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:40:31.230222 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:40:31.248288 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:40:31.269230 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:40:31.290355 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
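Ops (5) and (6) of the files stage above amount to: download the kubernetes sysext image into /opt and symlink it into /etc/extensions so that systemd-sysext can merge it after the pivot to the real root. Relative to /sysroot, that is roughly the following (a sketch of the effect, not Ignition's code path; URL and paths are the ones logged above):

```python
import os
import urllib.request

ROOT = "/sysroot"                    # the files stage writes through /sysroot
URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
       "latest/kubernetes-v1.31.0-x86-64.raw")
TARGET = "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
LINK = "/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(ROOT + TARGET), exist_ok=True)
urllib.request.urlretrieve(URL, ROOT + TARGET)   # op(6): fetch the image
os.makedirs(os.path.dirname(ROOT + LINK), exist_ok=True)
os.symlink(TARGET, ROOT + LINK)                  # op(5): enable the extension
```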
Jan 13 20:40:31.311189 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:40:31.332285 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:40:31.353209 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:40:31.373259 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:40:31.391229 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:40:31.391388 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:40:31.416288 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:40:31.436285 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:40:31.457140 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:40:31.457328 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:40:31.479226 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:40:31.479424 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:40:31.510351 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:40:31.510582 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:40:31.519406 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:40:31.519592 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:40:31.591043 ignition[984]: INFO : Ignition 2.20.0 Jan 13 20:40:31.591043 ignition[984]: INFO : Stage: umount Jan 13 20:40:31.591043 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:40:31.591043 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 20:40:31.591043 ignition[984]: INFO : umount: umount passed Jan 13 20:40:31.591043 ignition[984]: INFO : Ignition finished successfully Jan 13 20:40:31.545270 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:40:31.576256 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:40:31.576539 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:40:31.609181 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:40:31.623011 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:40:31.623308 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:40:31.651577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:40:31.651805 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:40:31.693200 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:40:31.694357 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:40:31.694477 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:40:31.709754 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:40:31.709890 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:40:31.732324 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:40:31.732449 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:40:31.751983 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:40:31.752050 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 13 20:40:31.769183 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:40:31.769254 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:40:31.789215 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:40:31.789288 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:40:31.799283 systemd[1]: Stopped target network.target - Network. Jan 13 20:40:31.816202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:40:31.816287 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:40:31.831309 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:40:31.849198 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:40:31.852982 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:40:31.864231 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:40:31.882222 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:40:31.897248 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:40:31.897309 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:40:31.922204 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:40:31.922270 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:40:31.933268 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:40:31.933352 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:40:31.967359 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:40:31.967461 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:40:31.976462 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:40:31.976564 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:40:32.003667 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:40:32.009987 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 13 20:40:32.030411 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:40:32.055842 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:40:32.056082 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:40:32.077649 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:40:32.077962 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:40:32.086479 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:40:32.086548 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:40:32.131113 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:40:32.149007 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:40:32.149251 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:40:32.168346 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:40:32.168446 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:40:32.187326 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:40:32.187430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:40:32.205268 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 13 20:40:32.205368 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:40:32.226481 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:40:32.235131 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:40:32.235309 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:40:32.632916 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 13 20:40:32.279453 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:40:32.279541 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:40:32.300251 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:40:32.300324 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:40:32.320262 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:40:32.320368 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:40:32.347361 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:40:32.347479 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:40:32.375394 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:40:32.375512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:40:32.430162 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:40:32.451009 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:40:32.451249 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:40:32.472250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:40:32.472349 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:32.495827 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:40:32.496019 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:40:32.513552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:40:32.513703 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:40:32.535837 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:40:32.561294 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:40:32.600171 systemd[1]: Switching root. Jan 13 20:40:32.850048 systemd-journald[184]: Journal stopped Jan 13 20:40:35.327120 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:40:35.327184 kernel: SELinux: policy capability open_perms=1 Jan 13 20:40:35.327208 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:40:35.327226 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:40:35.327245 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:40:35.327263 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:40:35.327286 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:40:35.327305 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:40:35.327329 kernel: audit: type=1403 audit(1736800833.165:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:40:35.327356 systemd[1]: Successfully loaded SELinux policy in 82.892ms. Jan 13 20:40:35.327381 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.673ms. 
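Once the root is switched, loading the SELinux policy is essentially the first thing the new systemd does; the capability lines above are printed while the policy is read in, and the audit record confirms the load. Whether the result is enforcing is exposed through selinuxfs; a minimal check, assuming the standard mount point:

```python
from pathlib import Path

def selinux_mode():
    enforce = Path("/sys/fs/selinux/enforce")
    if not enforce.exists():
        return "disabled"
    return "enforcing" if enforce.read_text().strip() == "1" else "permissive"
```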
Jan 13 20:40:35.327405 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:40:35.327426 systemd[1]: Detected virtualization google. Jan 13 20:40:35.327447 systemd[1]: Detected architecture x86-64. Jan 13 20:40:35.327483 systemd[1]: Detected first boot. Jan 13 20:40:35.327507 systemd[1]: Initializing machine ID from random generator. Jan 13 20:40:35.327529 zram_generator::config[1025]: No configuration found. Jan 13 20:40:35.327552 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:40:35.327575 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:40:35.327602 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:40:35.327623 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:40:35.327647 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:40:35.327669 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:40:35.327691 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:40:35.327714 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:40:35.327737 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:40:35.327764 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:40:35.327788 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:40:35.327810 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:40:35.327832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:40:35.327952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:40:35.327983 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:40:35.328006 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:40:35.328028 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:40:35.328057 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:40:35.328090 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 13 20:40:35.328111 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:40:35.328134 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:40:35.328156 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:40:35.328179 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:40:35.328209 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:40:35.328231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:40:35.328254 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:40:35.328282 systemd[1]: Reached target slices.target - Slice Units. 
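"Detected virtualization google" is derived from DMI/SMBIOS strings rather than from anything hypervisor-specific; the same probe can be done by hand against sysfs (standard path; the expected value is what GCE populates in its virtual firmware):

```python
from pathlib import Path

product = Path("/sys/class/dmi/id/product_name").read_text().strip()
print("google" if "Google Compute Engine" in product else "unknown")
```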
Jan 13 20:40:35.328305 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:40:35.328328 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:40:35.328352 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:40:35.328375 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:40:35.328398 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:40:35.328421 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:40:35.328452 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:40:35.328483 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:40:35.328507 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:40:35.328530 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:40:35.328554 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:40:35.328582 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:40:35.328607 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:40:35.328630 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:40:35.328654 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:40:35.328677 systemd[1]: Reached target machines.target - Containers. Jan 13 20:40:35.328700 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:40:35.328724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:40:35.328748 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:40:35.328775 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:40:35.328798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:40:35.328821 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:40:35.328844 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:40:35.328903 kernel: ACPI: bus type drm_connector registered Jan 13 20:40:35.328926 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:40:35.328947 kernel: fuse: init (API version 7.39) Jan 13 20:40:35.328967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:40:35.328989 kernel: loop: module loaded Jan 13 20:40:35.329015 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:40:35.329036 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:40:35.329064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:40:35.329086 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:40:35.329110 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:40:35.329133 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:40:35.329172 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
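modprobe@dm_mod, modprobe@drm and the rest above are instances of a single template unit that loads whatever module its instance name says; "fuse: init" and "loop: module loaded" are the corresponding kernel acknowledgements. A quick way to verify the set, assuming procfs is mounted:

```python
from pathlib import Path

wanted = {"fuse", "loop", "dm_mod"}        # instances started above
loaded = {line.split()[0]
          for line in Path("/proc/modules").read_text().splitlines()}
print(wanted - loaded or "all loaded")
```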
Jan 13 20:40:35.329196 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:40:35.329264 systemd-journald[1112]: Collecting audit messages is disabled. Jan 13 20:40:35.329312 systemd-journald[1112]: Journal started Jan 13 20:40:35.329361 systemd-journald[1112]: Runtime Journal (/run/log/journal/291ce237f8574f8e8b9141933d1e82bb) is 8.0M, max 148.6M, 140.6M free. Jan 13 20:40:35.333662 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:40:34.078285 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:40:34.101281 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 20:40:34.101879 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:40:35.375977 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:40:35.376085 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:40:35.376896 systemd[1]: Stopped verity-setup.service. Jan 13 20:40:35.415032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:40:35.429067 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:40:35.440433 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:40:35.451257 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:40:35.462314 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:40:35.472245 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:40:35.482226 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:40:35.493292 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:40:35.504484 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:40:35.516495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:40:35.529466 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:40:35.529711 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:40:35.541416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:40:35.541673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:40:35.553405 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:40:35.553676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:40:35.564364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:40:35.564625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:40:35.576366 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:40:35.576593 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:40:35.586382 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:40:35.586616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:40:35.596387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:40:35.606402 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:40:35.618372 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
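The Runtime Journal above lives on the /run tmpfs (currently 8.0M against a 148.6M cap) until it is flushed to persistent storage later in this log. Its footprint can be tallied directly; the machine-id directory is the one journald just printed:

```python
from pathlib import Path

jdir = Path("/run/log/journal/291ce237f8574f8e8b9141933d1e82bb")
total = sum(f.stat().st_size for f in jdir.glob("*.journal"))
print(f"runtime journal: {total / 2**20:.1f} MiB")
```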
Jan 13 20:40:35.630352 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:40:35.655299 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:40:35.670027 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:40:35.695968 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:40:35.706112 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:40:35.706365 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:40:35.717526 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:40:35.741193 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:40:35.753550 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:40:35.763267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:40:35.768425 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:40:35.785239 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:40:35.796106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:40:35.805206 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:40:35.818619 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:40:35.822961 systemd-journald[1112]: Time spent on flushing to /var/log/journal/291ce237f8574f8e8b9141933d1e82bb is 66.953ms for 911 entries. Jan 13 20:40:35.822961 systemd-journald[1112]: System Journal (/var/log/journal/291ce237f8574f8e8b9141933d1e82bb) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:40:35.938252 systemd-journald[1112]: Received client request to flush runtime journal. Jan 13 20:40:35.938332 kernel: loop0: detected capacity change from 0 to 52184 Jan 13 20:40:35.829092 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:40:35.851502 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:40:35.872108 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:40:35.888154 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:40:35.910994 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:40:35.922316 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:40:35.934418 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:40:35.946730 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:40:35.959644 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:40:35.975949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:40:35.994900 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:40:36.006027 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 13 20:40:36.028295 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:40:36.041539 kernel: loop1: detected capacity change from 0 to 205544 Jan 13 20:40:36.046557 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:40:36.082058 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:40:36.093524 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:40:36.094601 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:40:36.118455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:40:36.132147 kernel: loop2: detected capacity change from 0 to 138184 Jan 13 20:40:36.184709 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Jan 13 20:40:36.184750 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Jan 13 20:40:36.197778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:40:36.225078 kernel: loop3: detected capacity change from 0 to 141000 Jan 13 20:40:36.350433 kernel: loop4: detected capacity change from 0 to 52184 Jan 13 20:40:36.403998 kernel: loop5: detected capacity change from 0 to 205544 Jan 13 20:40:36.446466 kernel: loop6: detected capacity change from 0 to 138184 Jan 13 20:40:36.510224 kernel: loop7: detected capacity change from 0 to 141000 Jan 13 20:40:36.570182 (sd-merge)[1168]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 13 20:40:36.571105 (sd-merge)[1168]: Merged extensions into '/usr'. Jan 13 20:40:36.585205 systemd[1]: Reloading requested from client PID 1143 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:40:36.585614 systemd[1]: Reloading... Jan 13 20:40:36.715217 zram_generator::config[1190]: No configuration found. Jan 13 20:40:37.019478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:40:37.025937 ldconfig[1138]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:40:37.125309 systemd[1]: Reloading finished in 538 ms. Jan 13 20:40:37.153841 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:40:37.164679 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:40:37.185116 systemd[1]: Starting ensure-sysext.service... Jan 13 20:40:37.193965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:40:37.228944 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:40:37.228982 systemd[1]: Reloading... Jan 13 20:40:37.259545 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:40:37.260634 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:40:37.262437 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:40:37.263279 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. 
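The "(sd-merge)" lines above are systemd-sysext stacking the four extension images into a single overlay on /usr; kubernetes.raw is the symlink Ignition created during the files stage, and the reload that follows picks up the units the extensions ship. Discovery is a scan of a few fixed directories, roughly as below (the list is the common subset; there are further paths under /usr):

```python
from pathlib import Path

SEARCH = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in map(Path, SEARCH):
    if d.is_dir():
        for img in sorted(d.glob("*.raw")):
            print(img)    # e.g. /etc/extensions/kubernetes.raw from above
```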
Jan 13 20:40:37.263487 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jan 13 20:40:37.272158 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:40:37.272911 systemd-tmpfiles[1235]: Skipping /boot Jan 13 20:40:37.297701 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:40:37.297733 systemd-tmpfiles[1235]: Skipping /boot Jan 13 20:40:37.365900 zram_generator::config[1261]: No configuration found. Jan 13 20:40:37.500520 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:40:37.567095 systemd[1]: Reloading finished in 337 ms. Jan 13 20:40:37.587096 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:40:37.602552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:40:37.628191 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:40:37.641585 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:40:37.665637 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:40:37.685084 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:40:37.706912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:40:37.723516 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:40:37.728880 augenrules[1328]: No rules Jan 13 20:40:37.736916 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:40:37.738384 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:40:37.768509 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:40:37.779681 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:40:37.780615 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Jan 13 20:40:37.811907 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:40:37.834398 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:40:37.843268 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:40:37.853297 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:40:37.859313 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:40:37.878167 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:40:37.894129 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:40:37.912149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:40:37.932094 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 20:40:37.942170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:40:37.942301 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:40:37.962079 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 13 20:40:37.972008 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 20:40:37.972701 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:40:37.982426 augenrules[1337]: /sbin/augenrules: No change Jan 13 20:40:37.984375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:40:37.997540 systemd[1]: Finished ensure-sysext.service. Jan 13 20:40:38.006473 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:40:38.018559 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:40:38.019322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:40:38.024744 augenrules[1386]: No rules Jan 13 20:40:38.031583 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:40:38.037078 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:40:38.047728 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:40:38.049199 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:40:38.059563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:40:38.061031 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:40:38.072515 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:40:38.073571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:40:38.098949 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:40:38.138488 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 20:40:38.160013 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 20:40:38.177037 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 20:40:38.179233 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 13 20:40:38.183259 systemd-resolved[1320]: Positive Trust Anchors: Jan 13 20:40:38.184001 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:40:38.184368 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:40:38.192989 kernel: ACPI: button: Power Button [PWRF] Jan 13 20:40:38.193065 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 13 20:40:38.209891 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 20:40:38.216173 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:40:38.224066 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:40:38.224177 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
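The "Positive Trust Anchors" entry above is the IANA root-zone DS record that systemd-resolved ships as its built-in DNSSEC trust anchor; the negative anchors are the locally served and private zones it will never attempt to validate. Split into its DNS fields for readability:

```python
ds = ("20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
key_tag, algorithm, digest_type, digest = ds.split(maxsplit=3)
# key tag 20326 is the 2017 root KSK; algorithm 8 = RSA/SHA-256; digest type 2 = SHA-256
assert (key_tag, algorithm, digest_type) == ("20326", "8", "2")
```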
Jan 13 20:40:38.224213 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:40:38.227071 systemd-resolved[1320]: Defaulting to hostname 'linux'. Jan 13 20:40:38.235994 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:40:38.246132 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:40:38.297765 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 13 20:40:38.333888 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 13 20:40:38.340222 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 13 20:40:38.408298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:40:38.426089 systemd-networkd[1409]: lo: Link UP Jan 13 20:40:38.427513 systemd-networkd[1409]: lo: Gained carrier Jan 13 20:40:38.430886 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1347) Jan 13 20:40:38.431375 systemd-networkd[1409]: Enumeration completed Jan 13 20:40:38.432756 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:40:38.433047 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:40:38.435265 systemd[1]: Reached target network.target - Network. Jan 13 20:40:38.437152 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:40:38.441241 systemd-networkd[1409]: eth0: Link UP Jan 13 20:40:38.441255 systemd-networkd[1409]: eth0: Gained carrier Jan 13 20:40:38.441285 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:40:38.443087 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:40:38.456272 kernel: EDAC MC: Ver: 3.0.0 Jan 13 20:40:38.456975 systemd-networkd[1409]: eth0: DHCPv4 address 10.128.0.39/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 20:40:38.524541 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:40:38.535015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 20:40:38.538950 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:40:38.551337 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:40:38.556375 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:40:38.571920 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:40:38.584815 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:40:38.613209 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:40:38.613613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:40:38.622191 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:40:38.633027 lvm[1437]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Jan 13 20:40:38.643408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:40:38.655848 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:40:38.666180 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:40:38.677099 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:40:38.688310 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:40:38.698240 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:40:38.709042 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:40:38.720040 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:40:38.720098 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:40:38.729065 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:40:38.740461 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:40:38.751801 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:40:38.769831 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:40:38.781247 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:40:38.792370 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:40:38.802891 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:40:38.813033 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:40:38.822140 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:40:38.822201 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:40:38.834038 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:40:38.845846 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:40:38.860975 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:40:38.888107 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:40:38.909892 jq[1446]: false Jan 13 20:40:38.916076 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:40:38.925997 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:40:38.933685 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:40:38.955676 systemd[1]: Started ntpd.service - Network Time Service. 
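The DHCPv4 lease logged above (10.128.0.39/32, gateway 10.128.0.1 from 169.254.169.254) is characteristic of GCE: every address is handed out as a /32, so the gateway is never on-link and the stack needs an explicit on-link route to reach it. Reproducing networkd's result by hand would look roughly like this (illustrative only; networkd programs this over netlink rather than via the ip tool):

```python
import subprocess

def ip(*args):
    subprocess.run(["ip", *args], check=True)

ip("addr", "add", "10.128.0.39/32", "dev", "eth0")
ip("route", "add", "10.128.0.1", "dev", "eth0", "scope", "link")
ip("route", "add", "default", "via", "10.128.0.1", "dev", "eth0", "onlink")
```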
Jan 13 20:40:38.958026 coreos-metadata[1444]: Jan 13 20:40:38.957 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 13 20:40:38.963340 coreos-metadata[1444]: Jan 13 20:40:38.963 INFO Fetch successful Jan 13 20:40:38.965939 coreos-metadata[1444]: Jan 13 20:40:38.963 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 13 20:40:38.965939 coreos-metadata[1444]: Jan 13 20:40:38.964 INFO Fetch successful Jan 13 20:40:38.965939 coreos-metadata[1444]: Jan 13 20:40:38.964 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 13 20:40:38.965939 coreos-metadata[1444]: Jan 13 20:40:38.965 INFO Fetch successful Jan 13 20:40:38.965939 coreos-metadata[1444]: Jan 13 20:40:38.965 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 13 20:40:38.974354 coreos-metadata[1444]: Jan 13 20:40:38.971 INFO Fetch successful Jan 13 20:40:38.974168 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:40:38.992903 extend-filesystems[1449]: Found loop4 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found loop5 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found loop6 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found loop7 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found sda Jan 13 20:40:38.992903 extend-filesystems[1449]: Found sda1 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found sda2 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found sda3 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found usr Jan 13 20:40:38.992903 extend-filesystems[1449]: Found sda4 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found sda6 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found sda7 Jan 13 20:40:38.992903 extend-filesystems[1449]: Found sda9 Jan 13 20:40:38.992903 extend-filesystems[1449]: Checking size of /dev/sda9 Jan 13 20:40:39.094125 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 13 20:40:39.094312 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 13 20:40:39.094366 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (1378) Jan 13 20:40:38.993843 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:52 UTC 2025 (1): Starting Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: ---------------------------------------------------- Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: corporation. 
Support and training for ntp-4 are Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: available at https://www.nwtime.org/support Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: ---------------------------------------------------- Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: proto: precision = 0.105 usec (-23) Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: basedate set to 2025-01-01 Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: gps base set to 2025-01-05 (week 2348) Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: Listen normally on 3 eth0 10.128.0.39:123 Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: Listen normally on 4 lo [::1]:123 Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: bind(21) AF_INET6 fe80::4001:aff:fe80:27%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:27%2#123 Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: failed to init interface for address fe80::4001:aff:fe80:27%2 Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: Listening on routing socket on fd #21 for interface updates Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:40:39.094537 ntpd[1452]: 13 Jan 20:40:39 ntpd[1452]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:40:39.008104 dbus-daemon[1445]: [system] SELinux support is enabled Jan 13 20:40:39.098564 extend-filesystems[1449]: Resized partition /dev/sda9 Jan 13 20:40:39.094096 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:40:39.013836 dbus-daemon[1445]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1409 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 20:40:39.117366 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:40:39.117366 extend-filesystems[1466]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 20:40:39.117366 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 20:40:39.117366 extend-filesystems[1466]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 13 20:40:39.104592 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 13 20:40:39.019072 ntpd[1452]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 18:25:52 UTC 2025 (1): Starting Jan 13 20:40:39.118221 extend-filesystems[1449]: Resized filesystem in /dev/sda9 Jan 13 20:40:39.105417 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:40:39.019108 ntpd[1452]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 20:40:39.114133 systemd[1]: Starting update-engine.service - Update Engine... 
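The coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254; replaying one by hand requires the Metadata-Flavor header, otherwise the server answers 403:

    # Query the same endpoints coreos-metadata used; the header is mandatory.
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/hostname
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/machine-type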
Jan 13 20:40:39.019124 ntpd[1452]: ---------------------------------------------------- Jan 13 20:40:39.019137 ntpd[1452]: ntp-4 is maintained by Network Time Foundation, Jan 13 20:40:39.019278 ntpd[1452]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 20:40:39.019294 ntpd[1452]: corporation. Support and training for ntp-4 are Jan 13 20:40:39.019308 ntpd[1452]: available at https://www.nwtime.org/support Jan 13 20:40:39.019322 ntpd[1452]: ---------------------------------------------------- Jan 13 20:40:39.023691 ntpd[1452]: proto: precision = 0.105 usec (-23) Jan 13 20:40:39.025134 ntpd[1452]: basedate set to 2025-01-01 Jan 13 20:40:39.025157 ntpd[1452]: gps base set to 2025-01-05 (week 2348) Jan 13 20:40:39.028750 ntpd[1452]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 20:40:39.028811 ntpd[1452]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 20:40:39.029238 ntpd[1452]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 20:40:39.029294 ntpd[1452]: Listen normally on 3 eth0 10.128.0.39:123 Jan 13 20:40:39.029351 ntpd[1452]: Listen normally on 4 lo [::1]:123 Jan 13 20:40:39.029411 ntpd[1452]: bind(21) AF_INET6 fe80::4001:aff:fe80:27%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:40:39.029438 ntpd[1452]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:27%2#123 Jan 13 20:40:39.029459 ntpd[1452]: failed to init interface for address fe80::4001:aff:fe80:27%2 Jan 13 20:40:39.029504 ntpd[1452]: Listening on routing socket on fd #21 for interface updates Jan 13 20:40:39.030926 ntpd[1452]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:40:39.030963 ntpd[1452]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 20:40:39.172016 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:40:39.185695 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:40:39.214485 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:40:39.214785 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:40:39.215342 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:40:39.215934 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:40:39.226806 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:40:39.227088 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:40:39.234920 update_engine[1473]: I20250113 20:40:39.233590 1473 main.cc:92] Flatcar Update Engine starting Jan 13 20:40:39.235308 jq[1477]: true Jan 13 20:40:39.237489 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:40:39.238160 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:40:39.243199 update_engine[1473]: I20250113 20:40:39.243135 1473 update_check_scheduler.cc:74] Next update check in 2m27s Jan 13 20:40:39.247368 systemd-logind[1472]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 20:40:39.247798 systemd-logind[1472]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 13 20:40:39.247832 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 20:40:39.251982 systemd-logind[1472]: New seat seat0. Jan 13 20:40:39.261993 systemd[1]: Started systemd-logind.service - User Login Management. 
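The extend-filesystems run above grew the root filesystem on /dev/sda9 from 1617920 to 2538491 4-KiB blocks (about 6.2 GiB to about 9.7 GiB) while it was mounted. A sketch of the same on-line grow done by hand, assuming growpart from cloud-utils is available:

    growpart /dev/sda 9    # extend partition 9 to the end of the disk
    resize2fs /dev/sda9    # ext4 grows on-line while mounted at /
    # 2538491 blocks x 4096 bytes ≈ 9.7 GiB, matching the kernel EXT4 lines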
Jan 13 20:40:39.294487 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:40:39.301400 jq[1480]: true Jan 13 20:40:39.334307 dbus-daemon[1445]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 20:40:39.340951 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:40:39.377427 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:40:39.394376 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:40:39.394695 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:40:39.395195 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:40:39.421972 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 20:40:39.432043 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:40:39.432309 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:40:39.453241 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:40:39.481755 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:40:39.487457 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:40:39.509298 systemd[1]: Starting sshkeys.service... Jan 13 20:40:39.598029 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:40:39.617333 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:40:39.739970 dbus-daemon[1445]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 13 20:40:39.740176 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 13 20:40:39.740956 dbus-daemon[1445]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1505 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 13 20:40:39.767562 systemd[1]: Starting polkit.service - Authorization Manager... 
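ntpd's bind(21) failure on fe80::4001:aff:fe80:27%2 above is transient: at that point eth0's IPv6 link-local address was most likely still tentative (duplicate address detection had not finished), so it could not be bound yet. ntpd watches the routing socket for interface updates and, as the later "Listen normally on 7 eth0 [fe80::...]" line shows, binds the address once it is ready. A quick way to check the state by hand:

    ip -6 addr show dev eth0 scope link   # shows 'tentative' until DAD completes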
Jan 13 20:40:39.768116 coreos-metadata[1517]: Jan 13 20:40:39.764 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 13 20:40:39.770243 coreos-metadata[1517]: Jan 13 20:40:39.770 INFO Fetch failed with 404: resource not found Jan 13 20:40:39.770243 coreos-metadata[1517]: Jan 13 20:40:39.770 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 13 20:40:39.771977 coreos-metadata[1517]: Jan 13 20:40:39.771 INFO Fetch successful Jan 13 20:40:39.771977 coreos-metadata[1517]: Jan 13 20:40:39.771 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 13 20:40:39.773525 coreos-metadata[1517]: Jan 13 20:40:39.773 INFO Fetch failed with 404: resource not found Jan 13 20:40:39.773525 coreos-metadata[1517]: Jan 13 20:40:39.773 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 13 20:40:39.774399 coreos-metadata[1517]: Jan 13 20:40:39.774 INFO Fetch failed with 404: resource not found Jan 13 20:40:39.774586 coreos-metadata[1517]: Jan 13 20:40:39.774 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 13 20:40:39.776445 coreos-metadata[1517]: Jan 13 20:40:39.776 INFO Fetch successful Jan 13 20:40:39.782148 unknown[1517]: wrote ssh authorized keys file for user: core Jan 13 20:40:39.804483 locksmithd[1510]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:40:39.851501 update-ssh-keys[1526]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:40:39.852629 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:40:39.873633 systemd[1]: Finished sshkeys.service. Jan 13 20:40:39.876150 polkitd[1521]: Started polkitd version 121 Jan 13 20:40:39.897180 polkitd[1521]: Loading rules from directory /etc/polkit-1/rules.d Jan 13 20:40:39.897792 polkitd[1521]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 13 20:40:39.899179 polkitd[1521]: Finished loading, compiling and executing 2 rules Jan 13 20:40:39.899961 dbus-daemon[1445]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 13 20:40:39.900175 systemd[1]: Started polkit.service - Authorization Manager. Jan 13 20:40:39.902016 polkitd[1521]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 13 20:40:39.918511 containerd[1481]: time="2025-01-13T20:40:39.918411950Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:40:39.934634 systemd-hostnamed[1505]: Hostname set to (transient) Jan 13 20:40:39.937452 systemd-resolved[1320]: System hostname changed to 'ci-4186-1-0-cce1388fee6bd9e1c68c.c.flatcar-212911.internal'. Jan 13 20:40:39.975887 containerd[1481]: time="2025-01-13T20:40:39.975174387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:40:39.977878 containerd[1481]: time="2025-01-13T20:40:39.977787343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:40:39.977878 containerd[1481]: time="2025-01-13T20:40:39.977844172Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
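polkitd above reports loading and compiling two rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d. Rules are small JavaScript files; a minimal sketch of one (the action ID and group are illustrative, not taken from this host):

    cat <<'EOF' >/etc/polkit-1/rules.d/49-example.rules
    // Let members of group "wheel" manage systemd units without a password.
    polkit.addRule(function(action, subject) {
        if (action.id == "org.freedesktop.systemd1.manage-units" &&
            subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
    });
    EOF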
type=io.containerd.event.v1 Jan 13 20:40:39.978023 containerd[1481]: time="2025-01-13T20:40:39.977897647Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:40:39.978172 containerd[1481]: time="2025-01-13T20:40:39.978145785Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:40:39.978225 containerd[1481]: time="2025-01-13T20:40:39.978182579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:40:39.978312 containerd[1481]: time="2025-01-13T20:40:39.978283977Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:40:39.978361 containerd[1481]: time="2025-01-13T20:40:39.978315995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.978572034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.978601392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.978623992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.978640518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.978745313Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.979050257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.979233183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.979253641Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.979356613Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:40:39.979660 containerd[1481]: time="2025-01-13T20:40:39.979411526Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:40:39.985502 containerd[1481]: time="2025-01-13T20:40:39.985460943Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:40:39.986364 containerd[1481]: time="2025-01-13T20:40:39.985723400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 13 20:40:39.986364 containerd[1481]: time="2025-01-13T20:40:39.985847491Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:40:39.986364 containerd[1481]: time="2025-01-13T20:40:39.985901863Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:40:39.986364 containerd[1481]: time="2025-01-13T20:40:39.985925379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:40:39.986364 containerd[1481]: time="2025-01-13T20:40:39.986140390Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:40:39.989593 containerd[1481]: time="2025-01-13T20:40:39.989554926Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:40:39.989996 containerd[1481]: time="2025-01-13T20:40:39.989971600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:40:39.990128 containerd[1481]: time="2025-01-13T20:40:39.990110645Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:40:39.990236 containerd[1481]: time="2025-01-13T20:40:39.990220124Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:40:39.990383 containerd[1481]: time="2025-01-13T20:40:39.990322279Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:40:39.990383 containerd[1481]: time="2025-01-13T20:40:39.990348463Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:40:39.990566 containerd[1481]: time="2025-01-13T20:40:39.990369746Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:40:39.990566 containerd[1481]: time="2025-01-13T20:40:39.990519764Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:40:39.990833 containerd[1481]: time="2025-01-13T20:40:39.990547195Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:40:39.990833 containerd[1481]: time="2025-01-13T20:40:39.990697102Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:40:39.990833 containerd[1481]: time="2025-01-13T20:40:39.990718433Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:40:39.990833 containerd[1481]: time="2025-01-13T20:40:39.990757079Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:40:39.990833 containerd[1481]: time="2025-01-13T20:40:39.990788900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.990833 containerd[1481]: time="2025-01-13T20:40:39.990811611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991344 containerd[1481]: time="2025-01-13T20:40:39.991121556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 13 20:40:39.991344 containerd[1481]: time="2025-01-13T20:40:39.991150195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991344 containerd[1481]: time="2025-01-13T20:40:39.991188220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991344 containerd[1481]: time="2025-01-13T20:40:39.991211037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991344 containerd[1481]: time="2025-01-13T20:40:39.991230540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991344 containerd[1481]: time="2025-01-13T20:40:39.991271844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991344 containerd[1481]: time="2025-01-13T20:40:39.991294757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991880 containerd[1481]: time="2025-01-13T20:40:39.991326616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991880 containerd[1481]: time="2025-01-13T20:40:39.991663870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991880 containerd[1481]: time="2025-01-13T20:40:39.991686107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991880 containerd[1481]: time="2025-01-13T20:40:39.991727164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991880 containerd[1481]: time="2025-01-13T20:40:39.991752201Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:40:39.991880 containerd[1481]: time="2025-01-13T20:40:39.991802479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.991880 containerd[1481]: time="2025-01-13T20:40:39.991827026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.992512 containerd[1481]: time="2025-01-13T20:40:39.991845214Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:40:39.992512 containerd[1481]: time="2025-01-13T20:40:39.992295163Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:40:39.992512 containerd[1481]: time="2025-01-13T20:40:39.992325048Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:40:39.992512 containerd[1481]: time="2025-01-13T20:40:39.992444636Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:40:39.992512 containerd[1481]: time="2025-01-13T20:40:39.992467173Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:40:39.992512 containerd[1481]: time="2025-01-13T20:40:39.992483502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 13 20:40:39.993015 containerd[1481]: time="2025-01-13T20:40:39.992786583Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:40:39.993015 containerd[1481]: time="2025-01-13T20:40:39.992810434Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:40:39.993015 containerd[1481]: time="2025-01-13T20:40:39.992828218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:40:39.994016 containerd[1481]: time="2025-01-13T20:40:39.993670201Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:40:39.994016 containerd[1481]: time="2025-01-13T20:40:39.993772386Z" level=info msg="Connect containerd service" Jan 13 20:40:39.994016 containerd[1481]: time="2025-01-13T20:40:39.993841068Z" level=info msg="using legacy CRI server" Jan 13 20:40:39.994016 containerd[1481]: time="2025-01-13T20:40:39.993880622Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:40:39.994964 containerd[1481]: 
time="2025-01-13T20:40:39.994581013Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:40:39.997826 containerd[1481]: time="2025-01-13T20:40:39.997513035Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:40:39.999114 containerd[1481]: time="2025-01-13T20:40:39.999068293Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:40:39.999281 containerd[1481]: time="2025-01-13T20:40:39.999260659Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:40:39.999846 containerd[1481]: time="2025-01-13T20:40:39.999549863Z" level=info msg="Start subscribing containerd event" Jan 13 20:40:39.999940 containerd[1481]: time="2025-01-13T20:40:39.999891048Z" level=info msg="Start recovering state" Jan 13 20:40:40.001883 containerd[1481]: time="2025-01-13T20:40:39.999987824Z" level=info msg="Start event monitor" Jan 13 20:40:40.001883 containerd[1481]: time="2025-01-13T20:40:40.000010929Z" level=info msg="Start snapshots syncer" Jan 13 20:40:40.001883 containerd[1481]: time="2025-01-13T20:40:40.000025926Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:40:40.001883 containerd[1481]: time="2025-01-13T20:40:40.000093745Z" level=info msg="Start streaming server" Jan 13 20:40:40.001883 containerd[1481]: time="2025-01-13T20:40:40.001192264Z" level=info msg="containerd successfully booted in 0.084623s" Jan 13 20:40:40.001043 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:40:40.019816 ntpd[1452]: bind(24) AF_INET6 fe80::4001:aff:fe80:27%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:40:40.019920 ntpd[1452]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:27%2#123 Jan 13 20:40:40.020328 ntpd[1452]: 13 Jan 20:40:40 ntpd[1452]: bind(24) AF_INET6 fe80::4001:aff:fe80:27%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 20:40:40.020328 ntpd[1452]: 13 Jan 20:40:40 ntpd[1452]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:27%2#123 Jan 13 20:40:40.020328 ntpd[1452]: 13 Jan 20:40:40 ntpd[1452]: failed to init interface for address fe80::4001:aff:fe80:27%2 Jan 13 20:40:40.019945 ntpd[1452]: failed to init interface for address fe80::4001:aff:fe80:27%2 Jan 13 20:40:40.034195 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:40:40.063051 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:40:40.081343 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:40:40.111831 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:40:40.112137 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:40:40.129602 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:40:40.140128 systemd-networkd[1409]: eth0: Gained IPv6LL Jan 13 20:40:40.146073 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:40:40.158523 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:40:40.173178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:40:40.192301 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 13 20:40:40.207513 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 13 20:40:40.220267 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:40:40.226093 init.sh[1558]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 13 20:40:40.227533 init.sh[1558]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 13 20:40:40.227533 init.sh[1558]: + /usr/bin/google_instance_setup Jan 13 20:40:40.240774 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:40:40.261398 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:40:40.276363 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 20:40:40.286556 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:40:40.728802 instance-setup[1561]: INFO Running google_set_multiqueue. Jan 13 20:40:40.750192 instance-setup[1561]: INFO Set channels for eth0 to 2. Jan 13 20:40:40.755542 instance-setup[1561]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 13 20:40:40.758047 instance-setup[1561]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 13 20:40:40.758118 instance-setup[1561]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 13 20:40:40.759723 instance-setup[1561]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 13 20:40:40.760397 instance-setup[1561]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 13 20:40:40.762657 instance-setup[1561]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 13 20:40:40.762713 instance-setup[1561]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jan 13 20:40:40.764343 instance-setup[1561]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 13 20:40:40.772250 instance-setup[1561]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 20:40:40.776016 instance-setup[1561]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 13 20:40:40.777889 instance-setup[1561]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 13 20:40:40.777951 instance-setup[1561]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 13 20:40:40.799634 init.sh[1558]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 13 20:40:40.956179 startup-script[1601]: INFO Starting startup scripts. Jan 13 20:40:40.962166 startup-script[1601]: INFO No startup scripts found in metadata. Jan 13 20:40:40.962244 startup-script[1601]: INFO Finished running startup scripts. Jan 13 20:40:40.984983 init.sh[1558]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 13 20:40:40.984983 init.sh[1558]: + daemon_pids=() Jan 13 20:40:40.984983 init.sh[1558]: + for d in accounts clock_skew network Jan 13 20:40:40.984983 init.sh[1558]: + daemon_pids+=($!) Jan 13 20:40:40.984983 init.sh[1558]: + for d in accounts clock_skew network Jan 13 20:40:40.984983 init.sh[1558]: + daemon_pids+=($!) Jan 13 20:40:40.984983 init.sh[1558]: + for d in accounts clock_skew network Jan 13 20:40:40.985336 init.sh[1558]: + daemon_pids+=($!) 
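google_set_multiqueue above spreads the two virtio-net queues across the two vCPUs: each queue's IRQ is pinned to one CPU, and transmit packet steering (XPS) gets a matching CPU bitmask. The equivalent writes by hand (IRQ numbers taken from the log; the "Value too large" errors appear to be the script probing files whose mask width the kernel rejects, and seem harmless on this shape):

    echo 0 >/proc/irq/31/smp_affinity_list            # queue 0 IRQ -> CPU 0
    echo 1 >/proc/irq/33/smp_affinity_list            # queue 1 IRQ -> CPU 1
    echo 1 >/sys/class/net/eth0/queues/tx-0/xps_cpus  # tx-0 -> CPU 0 (mask 0x1)
    echo 2 >/sys/class/net/eth0/queues/tx-1/xps_cpus  # tx-1 -> CPU 1 (mask 0x2)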
Jan 13 20:40:40.985336 init.sh[1558]: + NOTIFY_SOCKET=/run/systemd/notify Jan 13 20:40:40.985336 init.sh[1558]: + /usr/bin/systemd-notify --ready Jan 13 20:40:40.986613 init.sh[1604]: + /usr/bin/google_accounts_daemon Jan 13 20:40:40.988446 init.sh[1605]: + /usr/bin/google_clock_skew_daemon Jan 13 20:40:40.988998 init.sh[1606]: + /usr/bin/google_network_daemon Jan 13 20:40:41.000701 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 13 20:40:41.019061 init.sh[1558]: + wait -n 1604 1605 1606 Jan 13 20:40:41.300791 groupadd[1610]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 13 20:40:41.308634 groupadd[1610]: group added to /etc/gshadow: name=google-sudoers Jan 13 20:40:41.381541 google-clock-skew[1605]: INFO Starting Google Clock Skew daemon. Jan 13 20:40:41.385870 groupadd[1610]: new group: name=google-sudoers, GID=1000 Jan 13 20:40:41.392690 google-clock-skew[1605]: INFO Clock drift token has changed: 0. Jan 13 20:40:41.412098 google-networking[1606]: INFO Starting Google Networking daemon. Jan 13 20:40:41.431955 google-accounts[1604]: INFO Starting Google Accounts daemon. Jan 13 20:40:41.446337 google-accounts[1604]: WARNING OS Login not installed. Jan 13 20:40:41.448312 google-accounts[1604]: INFO Creating a new user account for 0. Jan 13 20:40:41.453698 init.sh[1624]: useradd: invalid user name '0': use --badname to ignore Jan 13 20:40:41.453028 google-accounts[1604]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 13 20:40:42.000460 systemd-resolved[1320]: Clock change detected. Flushing caches. Jan 13 20:40:42.000500 google-clock-skew[1605]: INFO Synced system time with hardware clock. Jan 13 20:40:42.120160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:40:42.131978 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:40:42.141545 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:40:42.142688 systemd[1]: Startup finished in 1.002s (kernel) + 8.394s (initrd) + 8.602s (userspace) = 18.000s. Jan 13 20:40:42.161396 agetty[1570]: failed to open credentials directory Jan 13 20:40:42.162498 agetty[1569]: failed to open credentials directory Jan 13 20:40:42.904892 kubelet[1631]: E0113 20:40:42.904817 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:40:42.906746 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:40:42.907043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:40:42.907589 systemd[1]: kubelet.service: Consumed 1.192s CPU time. Jan 13 20:40:43.473920 ntpd[1452]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:27%2]:123 Jan 13 20:40:43.474423 ntpd[1452]: 13 Jan 20:40:43 ntpd[1452]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:27%2]:123 Jan 13 20:40:47.682322 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:40:47.693531 systemd[1]: Started sshd@0-10.128.0.39:22-139.178.68.195:56948.service - OpenSSH per-connection server daemon (139.178.68.195:56948). 
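The kubelet failure just below is the normal first-boot state: /var/lib/kubelet/config.yaml does not exist until something (typically kubeadm init/join) writes it, after which the unit is started again. A minimal hand-written file of the same kind, with illustrative values:

    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches SystemdCgroup=true in containerd
    evictionHard:
      memory.available: "100Mi"
    EOF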
Jan 13 20:40:47.999733 sshd[1644]: Accepted publickey for core from 139.178.68.195 port 56948 ssh2: RSA SHA256:O3n3XrtwSVUJL4vbAnrZLm217nLYEk3kKlGrdNO40l8 Jan 13 20:40:48.001824 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:48.016618 systemd-logind[1472]: New session 1 of user core. Jan 13 20:40:48.018207 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:40:48.024354 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:40:48.053375 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:40:48.061511 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:40:48.082735 (systemd)[1648]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:40:48.219453 systemd[1648]: Queued start job for default target default.target. Jan 13 20:40:48.231685 systemd[1648]: Created slice app.slice - User Application Slice. Jan 13 20:40:48.231754 systemd[1648]: Reached target paths.target - Paths. Jan 13 20:40:48.231783 systemd[1648]: Reached target timers.target - Timers. Jan 13 20:40:48.233715 systemd[1648]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:40:48.249728 systemd[1648]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:40:48.249920 systemd[1648]: Reached target sockets.target - Sockets. Jan 13 20:40:48.249947 systemd[1648]: Reached target basic.target - Basic System. Jan 13 20:40:48.250650 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:40:48.250698 systemd[1648]: Reached target default.target - Main User Target. Jan 13 20:40:48.250770 systemd[1648]: Startup finished in 159ms. Jan 13 20:40:48.262421 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:40:48.487493 systemd[1]: Started sshd@1-10.128.0.39:22-139.178.68.195:56950.service - OpenSSH per-connection server daemon (139.178.68.195:56950). Jan 13 20:40:48.793156 sshd[1660]: Accepted publickey for core from 139.178.68.195 port 56950 ssh2: RSA SHA256:O3n3XrtwSVUJL4vbAnrZLm217nLYEk3kKlGrdNO40l8 Jan 13 20:40:48.794879 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:48.801833 systemd-logind[1472]: New session 2 of user core. Jan 13 20:40:48.816226 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:40:49.010561 sshd[1662]: Connection closed by 139.178.68.195 port 56950 Jan 13 20:40:49.011408 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:49.015817 systemd[1]: sshd@1-10.128.0.39:22-139.178.68.195:56950.service: Deactivated successfully. Jan 13 20:40:49.018228 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:40:49.020011 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:40:49.021383 systemd-logind[1472]: Removed session 2. Jan 13 20:40:49.065348 systemd[1]: Started sshd@2-10.128.0.39:22-139.178.68.195:56952.service - OpenSSH per-connection server daemon (139.178.68.195:56952). Jan 13 20:40:49.363816 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 56952 ssh2: RSA SHA256:O3n3XrtwSVUJL4vbAnrZLm217nLYEk3kKlGrdNO40l8 Jan 13 20:40:49.365588 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:49.372523 systemd-logind[1472]: New session 3 of user core. 
Jan 13 20:40:49.381221 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:40:49.573363 sshd[1669]: Connection closed by 139.178.68.195 port 56952 Jan 13 20:40:49.574206 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:49.578451 systemd[1]: sshd@2-10.128.0.39:22-139.178.68.195:56952.service: Deactivated successfully. Jan 13 20:40:49.580908 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:40:49.582708 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:40:49.584123 systemd-logind[1472]: Removed session 3. Jan 13 20:40:49.631331 systemd[1]: Started sshd@3-10.128.0.39:22-139.178.68.195:56962.service - OpenSSH per-connection server daemon (139.178.68.195:56962). Jan 13 20:40:49.925451 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 56962 ssh2: RSA SHA256:O3n3XrtwSVUJL4vbAnrZLm217nLYEk3kKlGrdNO40l8 Jan 13 20:40:49.927187 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:49.933387 systemd-logind[1472]: New session 4 of user core. Jan 13 20:40:49.949230 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:40:50.138987 sshd[1676]: Connection closed by 139.178.68.195 port 56962 Jan 13 20:40:50.139896 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:50.145197 systemd[1]: sshd@3-10.128.0.39:22-139.178.68.195:56962.service: Deactivated successfully. Jan 13 20:40:50.147398 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:40:50.148334 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:40:50.149763 systemd-logind[1472]: Removed session 4. Jan 13 20:40:50.194352 systemd[1]: Started sshd@4-10.128.0.39:22-139.178.68.195:56964.service - OpenSSH per-connection server daemon (139.178.68.195:56964). Jan 13 20:40:50.491993 sshd[1681]: Accepted publickey for core from 139.178.68.195 port 56964 ssh2: RSA SHA256:O3n3XrtwSVUJL4vbAnrZLm217nLYEk3kKlGrdNO40l8 Jan 13 20:40:50.493669 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:50.500231 systemd-logind[1472]: New session 5 of user core. Jan 13 20:40:50.507189 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:40:50.684561 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:40:50.685100 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:40:50.699326 sudo[1684]: pam_unix(sudo:session): session closed for user root Jan 13 20:40:50.741895 sshd[1683]: Connection closed by 139.178.68.195 port 56964 Jan 13 20:40:50.743166 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:50.748116 systemd[1]: sshd@4-10.128.0.39:22-139.178.68.195:56964.service: Deactivated successfully. Jan 13 20:40:50.750335 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:40:50.752409 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:40:50.753884 systemd-logind[1472]: Removed session 5. Jan 13 20:40:50.798361 systemd[1]: Started sshd@5-10.128.0.39:22-139.178.68.195:56980.service - OpenSSH per-connection server daemon (139.178.68.195:56980). 
Jan 13 20:40:51.101773 sshd[1689]: Accepted publickey for core from 139.178.68.195 port 56980 ssh2: RSA SHA256:O3n3XrtwSVUJL4vbAnrZLm217nLYEk3kKlGrdNO40l8 Jan 13 20:40:51.103615 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:51.110037 systemd-logind[1472]: New session 6 of user core. Jan 13 20:40:51.124257 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:40:51.282460 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:40:51.283003 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:40:51.288007 sudo[1693]: pam_unix(sudo:session): session closed for user root Jan 13 20:40:51.301567 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:40:51.302072 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:40:51.318418 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:40:51.357266 augenrules[1715]: No rules Jan 13 20:40:51.358561 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:40:51.358814 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:40:51.361134 sudo[1692]: pam_unix(sudo:session): session closed for user root Jan 13 20:40:51.404402 sshd[1691]: Connection closed by 139.178.68.195 port 56980 Jan 13 20:40:51.405255 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:51.409405 systemd[1]: sshd@5-10.128.0.39:22-139.178.68.195:56980.service: Deactivated successfully. Jan 13 20:40:51.411741 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:40:51.413568 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:40:51.415008 systemd-logind[1472]: Removed session 6. Jan 13 20:40:51.461695 systemd[1]: Started sshd@6-10.128.0.39:22-139.178.68.195:56992.service - OpenSSH per-connection server daemon (139.178.68.195:56992). Jan 13 20:40:51.762818 sshd[1723]: Accepted publickey for core from 139.178.68.195 port 56992 ssh2: RSA SHA256:O3n3XrtwSVUJL4vbAnrZLm217nLYEk3kKlGrdNO40l8 Jan 13 20:40:51.764700 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:40:51.771156 systemd-logind[1472]: New session 7 of user core. Jan 13 20:40:51.781227 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:40:51.946686 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:40:51.947259 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:40:52.838133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:40:52.838577 systemd[1]: kubelet.service: Consumed 1.192s CPU time. Jan 13 20:40:52.849509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:40:52.905632 systemd[1]: Reloading requested from client PID 1758 ('systemctl') (unit session-7.scope)... Jan 13 20:40:52.905658 systemd[1]: Reloading... Jan 13 20:40:53.069984 zram_generator::config[1798]: No configuration found. Jan 13 20:40:53.214487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
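The reload warning above says docker.socket still listens on the legacy /var/run path; systemd rewrites it in memory but asks for the unit file to be updated. A drop-in does that without touching the shipped unit (hypothetical drop-in name); the empty ListenStream= clears the inherited value before re-adding the new path:

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload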
Jan 13 20:40:53.317796 systemd[1]: Reloading finished in 411 ms. Jan 13 20:40:53.376201 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:40:53.376351 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:40:53.376669 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:40:53.378910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:40:53.675880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:40:53.687555 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:40:53.742139 kubelet[1847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:40:53.742623 kubelet[1847]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:40:53.742623 kubelet[1847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:40:53.744131 kubelet[1847]: I0113 20:40:53.744042 1847 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:40:55.074019 kubelet[1847]: I0113 20:40:55.073301 1847 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:40:55.074019 kubelet[1847]: I0113 20:40:55.073347 1847 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:40:55.074705 kubelet[1847]: I0113 20:40:55.074679 1847 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:40:55.110988 kubelet[1847]: I0113 20:40:55.110274 1847 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:40:55.123855 kubelet[1847]: E0113 20:40:55.123815 1847 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:40:55.124044 kubelet[1847]: I0113 20:40:55.123982 1847 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:40:55.129703 kubelet[1847]: I0113 20:40:55.129553 1847 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:40:55.131510 kubelet[1847]: I0113 20:40:55.131458 1847 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:40:55.131796 kubelet[1847]: I0113 20:40:55.131725 1847 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:40:55.132055 kubelet[1847]: I0113 20:40:55.131800 1847 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.128.0.39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:40:55.132265 kubelet[1847]: I0113 20:40:55.132063 1847 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:40:55.132265 kubelet[1847]: I0113 20:40:55.132081 1847 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:40:55.132265 kubelet[1847]: I0113 20:40:55.132239 1847 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:40:55.134634 kubelet[1847]: I0113 20:40:55.134208 1847 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:40:55.134634 kubelet[1847]: I0113 20:40:55.134248 1847 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:40:55.134634 kubelet[1847]: I0113 20:40:55.134295 1847 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:40:55.134634 kubelet[1847]: I0113 20:40:55.134318 1847 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:40:55.136309 kubelet[1847]: E0113 20:40:55.135882 1847 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:55.136309 kubelet[1847]: E0113 20:40:55.135984 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:55.141690 kubelet[1847]: I0113 20:40:55.141589 1847 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:40:55.144708 kubelet[1847]: I0113 20:40:55.144552 1847 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:40:55.145797 kubelet[1847]: W0113 20:40:55.145746 1847 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:40:55.146807 kubelet[1847]: I0113 20:40:55.146603 1847 server.go:1269] "Started kubelet" Jan 13 20:40:55.148977 kubelet[1847]: I0113 20:40:55.148242 1847 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:40:55.148977 kubelet[1847]: I0113 20:40:55.148378 1847 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:40:55.148977 kubelet[1847]: I0113 20:40:55.148850 1847 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:40:55.149651 kubelet[1847]: I0113 20:40:55.149610 1847 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:40:55.151994 kubelet[1847]: I0113 20:40:55.151929 1847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:40:55.160986 kubelet[1847]: I0113 20:40:55.159303 1847 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:40:55.160986 kubelet[1847]: E0113 20:40:55.159650 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.128.0.39\" not found" Jan 13 20:40:55.160986 kubelet[1847]: I0113 20:40:55.160061 1847 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:40:55.160986 kubelet[1847]: I0113 20:40:55.160172 1847 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:40:55.160986 kubelet[1847]: I0113 20:40:55.160283 1847 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:40:55.171238 kubelet[1847]: I0113 20:40:55.171008 1847 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:40:55.171238 kubelet[1847]: I0113 20:40:55.171155 1847 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:40:55.173111 kubelet[1847]: E0113 20:40:55.172064 1847 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.39\" not found" node="10.128.0.39" Jan 13 20:40:55.177243 kubelet[1847]: E0113 20:40:55.177158 1847 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:40:55.178514 kubelet[1847]: I0113 20:40:55.178487 1847 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:40:55.207757 kubelet[1847]: I0113 20:40:55.206805 1847 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:40:55.207757 kubelet[1847]: I0113 20:40:55.206844 1847 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:40:55.207757 kubelet[1847]: I0113 20:40:55.206869 1847 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:40:55.212271 kubelet[1847]: I0113 20:40:55.212146 1847 policy_none.go:49] "None policy: Start" Jan 13 20:40:55.216037 kubelet[1847]: I0113 20:40:55.215311 1847 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:40:55.216037 kubelet[1847]: I0113 20:40:55.215356 1847 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:40:55.234282 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:40:55.249847 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:40:55.256046 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:40:55.259974 kubelet[1847]: E0113 20:40:55.259921 1847 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.128.0.39\" not found" Jan 13 20:40:55.263543 kubelet[1847]: I0113 20:40:55.263519 1847 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:40:55.264322 kubelet[1847]: I0113 20:40:55.263915 1847 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:40:55.264322 kubelet[1847]: I0113 20:40:55.263934 1847 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:40:55.268661 kubelet[1847]: I0113 20:40:55.268644 1847 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:40:55.274065 kubelet[1847]: E0113 20:40:55.273889 1847 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.39\" not found" Jan 13 20:40:55.279325 kubelet[1847]: I0113 20:40:55.279239 1847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:40:55.281216 kubelet[1847]: I0113 20:40:55.281177 1847 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:40:55.281320 kubelet[1847]: I0113 20:40:55.281221 1847 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:40:55.281320 kubelet[1847]: I0113 20:40:55.281246 1847 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:40:55.281320 kubelet[1847]: E0113 20:40:55.281304 1847 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 13 20:40:55.366759 kubelet[1847]: I0113 20:40:55.365788 1847 kubelet_node_status.go:72] "Attempting to register node" node="10.128.0.39" Jan 13 20:40:55.374368 kubelet[1847]: I0113 20:40:55.374334 1847 kubelet_node_status.go:75] "Successfully registered node" node="10.128.0.39" Jan 13 20:40:55.386477 kubelet[1847]: I0113 20:40:55.386442 1847 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 20:40:55.386916 containerd[1481]: time="2025-01-13T20:40:55.386854782Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:40:55.387712 kubelet[1847]: I0113 20:40:55.387626 1847 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:40:55.722189 sudo[1726]: pam_unix(sudo:session): session closed for user root Jan 13 20:40:55.765403 sshd[1725]: Connection closed by 139.178.68.195 port 56992 Jan 13 20:40:55.766283 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Jan 13 20:40:55.770770 systemd[1]: sshd@6-10.128.0.39:22-139.178.68.195:56992.service: Deactivated successfully. Jan 13 20:40:55.773713 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:40:55.776076 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:40:55.777655 systemd-logind[1472]: Removed session 7. 
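The kubelet deprecation warnings earlier in this boot (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) all point at the same fix: move the values into the file passed via --config. A minimal sketch of the corresponding KubeletConfiguration, assuming containerd's default socket path (the endpoint value itself is not printed here); the cgroup driver and hard-eviction thresholds are read straight off the NodeConfig dump above:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # assumption: containerd default
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # matches the Flexvolume probe dir above
    staticPodPath: /etc/kubernetes/manifests  # already in effect ("Adding static pod path"); the repeated
                                              # "Unable to read config path" errors only mean the directory is absent
    cgroupDriver: systemd                     # NodeConfig: "CgroupDriver":"systemd"
    cgroupsPerQOS: true                       # NodeConfig: "CgroupsPerQOS":true
    evictionHard:                             # NodeConfig: HardEvictionThresholds
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    # --pod-infra-container-image has no config-file equivalent; per the warning above, the
    # sandbox (pause) image should instead be set in the container runtime's own config.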
Jan 13 20:40:56.079722 kubelet[1847]: I0113 20:40:56.079647 1847 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:40:56.080567 kubelet[1847]: W0113 20:40:56.080040 1847 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:40:56.080567 kubelet[1847]: W0113 20:40:56.080113 1847 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:40:56.080567 kubelet[1847]: W0113 20:40:56.080543 1847 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:40:56.136250 kubelet[1847]: E0113 20:40:56.136135 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:56.136250 kubelet[1847]: I0113 20:40:56.136180 1847 apiserver.go:52] "Watching apiserver" Jan 13 20:40:56.143267 kubelet[1847]: E0113 20:40:56.142224 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:40:56.156265 systemd[1]: Created slice kubepods-besteffort-pod9c249fcc_0413_4594_ad53_355fd7dd0193.slice - libcontainer container kubepods-besteffort-pod9c249fcc_0413_4594_ad53_355fd7dd0193.slice. 
Jan 13 20:40:56.161085 kubelet[1847]: I0113 20:40:56.161045 1847 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:40:56.170643 kubelet[1847]: I0113 20:40:56.169412 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-policysync\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.170643 kubelet[1847]: I0113 20:40:56.169495 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c249fcc-0413-4594-ad53-355fd7dd0193-tigera-ca-bundle\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.170643 kubelet[1847]: I0113 20:40:56.169539 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-bin-dir\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.170643 kubelet[1847]: I0113 20:40:56.169571 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-log-dir\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.170643 kubelet[1847]: I0113 20:40:56.169631 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e-varrun\") pod \"csi-node-driver-4fr6x\" (UID: \"6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e\") " pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:40:56.171103 kubelet[1847]: I0113 20:40:56.169662 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e-kubelet-dir\") pod \"csi-node-driver-4fr6x\" (UID: \"6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e\") " pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:40:56.171103 kubelet[1847]: I0113 20:40:56.169694 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e-socket-dir\") pod \"csi-node-driver-4fr6x\" (UID: \"6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e\") " pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:40:56.171103 kubelet[1847]: I0113 20:40:56.169729 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-xtables-lock\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.171103 kubelet[1847]: I0113 20:40:56.169765 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c249fcc-0413-4594-ad53-355fd7dd0193-node-certs\") pod \"calico-node-mp9v9\" (UID: 
\"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.171103 kubelet[1847]: I0113 20:40:56.169800 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chhdz\" (UniqueName: \"kubernetes.io/projected/9c249fcc-0413-4594-ad53-355fd7dd0193-kube-api-access-chhdz\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.171355 kubelet[1847]: I0113 20:40:56.169832 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e-registration-dir\") pod \"csi-node-driver-4fr6x\" (UID: \"6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e\") " pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:40:56.171355 kubelet[1847]: I0113 20:40:56.169864 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ce1f833-a031-4e8c-a486-abb7720b6160-xtables-lock\") pod \"kube-proxy-q89lp\" (UID: \"3ce1f833-a031-4e8c-a486-abb7720b6160\") " pod="kube-system/kube-proxy-q89lp" Jan 13 20:40:56.171355 kubelet[1847]: I0113 20:40:56.169898 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-lib-modules\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.171355 kubelet[1847]: I0113 20:40:56.169935 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-net-dir\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.171355 kubelet[1847]: I0113 20:40:56.170005 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ce1f833-a031-4e8c-a486-abb7720b6160-kube-proxy\") pod \"kube-proxy-q89lp\" (UID: \"3ce1f833-a031-4e8c-a486-abb7720b6160\") " pod="kube-system/kube-proxy-q89lp" Jan 13 20:40:56.171567 kubelet[1847]: I0113 20:40:56.170041 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ce1f833-a031-4e8c-a486-abb7720b6160-lib-modules\") pod \"kube-proxy-q89lp\" (UID: \"3ce1f833-a031-4e8c-a486-abb7720b6160\") " pod="kube-system/kube-proxy-q89lp" Jan 13 20:40:56.171567 kubelet[1847]: I0113 20:40:56.170076 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdsl4\" (UniqueName: \"kubernetes.io/projected/3ce1f833-a031-4e8c-a486-abb7720b6160-kube-api-access-bdsl4\") pod \"kube-proxy-q89lp\" (UID: \"3ce1f833-a031-4e8c-a486-abb7720b6160\") " pod="kube-system/kube-proxy-q89lp" Jan 13 20:40:56.171567 kubelet[1847]: I0113 20:40:56.170119 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-var-run-calico\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " 
pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.171567 kubelet[1847]: I0113 20:40:56.170154 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-flexvol-driver-host\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.171567 kubelet[1847]: I0113 20:40:56.170185 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cshjb\" (UniqueName: \"kubernetes.io/projected/6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e-kube-api-access-cshjb\") pod \"csi-node-driver-4fr6x\" (UID: \"6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e\") " pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:40:56.171796 kubelet[1847]: I0113 20:40:56.170227 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-var-lib-calico\") pod \"calico-node-mp9v9\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " pod="calico-system/calico-node-mp9v9" Jan 13 20:40:56.179869 systemd[1]: Created slice kubepods-besteffort-pod3ce1f833_a031_4e8c_a486_abb7720b6160.slice - libcontainer container kubepods-besteffort-pod3ce1f833_a031_4e8c_a486_abb7720b6160.slice. Jan 13 20:40:56.280988 kubelet[1847]: E0113 20:40:56.278748 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.280988 kubelet[1847]: W0113 20:40:56.278803 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.280988 kubelet[1847]: E0113 20:40:56.278859 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:40:56.286380 kubelet[1847]: E0113 20:40:56.286070 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.286380 kubelet[1847]: W0113 20:40:56.286109 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.286380 kubelet[1847]: E0113 20:40:56.286150 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:40:56.289219 kubelet[1847]: E0113 20:40:56.288868 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.289219 kubelet[1847]: W0113 20:40:56.288903 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.289219 kubelet[1847]: E0113 20:40:56.288940 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:40:56.291511 kubelet[1847]: E0113 20:40:56.290844 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.291511 kubelet[1847]: W0113 20:40:56.290870 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.291511 kubelet[1847]: E0113 20:40:56.290903 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:40:56.291511 kubelet[1847]: E0113 20:40:56.291486 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.291511 kubelet[1847]: W0113 20:40:56.291504 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.292161 kubelet[1847]: E0113 20:40:56.291902 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.292161 kubelet[1847]: W0113 20:40:56.291923 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.292161 kubelet[1847]: E0113 20:40:56.291992 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:40:56.292161 kubelet[1847]: E0113 20:40:56.292107 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:40:56.302717 kubelet[1847]: E0113 20:40:56.300024 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.302717 kubelet[1847]: W0113 20:40:56.300062 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.302717 kubelet[1847]: E0113 20:40:56.300100 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:40:56.303416 kubelet[1847]: E0113 20:40:56.303369 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.303416 kubelet[1847]: W0113 20:40:56.303414 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.303574 kubelet[1847]: E0113 20:40:56.303451 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:40:56.314627 kubelet[1847]: E0113 20:40:56.314588 1847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:40:56.315004 kubelet[1847]: W0113 20:40:56.314863 1847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:40:56.315004 kubelet[1847]: E0113 20:40:56.314916 1847 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:40:56.473997 containerd[1481]: time="2025-01-13T20:40:56.473458274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mp9v9,Uid:9c249fcc-0413-4594-ad53-355fd7dd0193,Namespace:calico-system,Attempt:0,}" Jan 13 20:40:56.488825 containerd[1481]: time="2025-01-13T20:40:56.488747320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q89lp,Uid:3ce1f833-a031-4e8c-a486-abb7720b6160,Namespace:kube-system,Attempt:0,}" Jan 13 20:40:56.955484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014712054.mount: Deactivated successfully. Jan 13 20:40:56.965583 containerd[1481]: time="2025-01-13T20:40:56.965486439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:40:56.967900 containerd[1481]: time="2025-01-13T20:40:56.967826459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:40:56.969105 containerd[1481]: time="2025-01-13T20:40:56.969047840Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 13 20:40:56.970505 containerd[1481]: time="2025-01-13T20:40:56.970447152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:40:56.971553 containerd[1481]: time="2025-01-13T20:40:56.971485533Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:40:56.975125 containerd[1481]: time="2025-01-13T20:40:56.975031108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:40:56.976684 containerd[1481]: time="2025-01-13T20:40:56.976244969Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 502.5638ms" Jan 13 20:40:56.981833 containerd[1481]: time="2025-01-13T20:40:56.981402293Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 492.446004ms" Jan 13 20:40:57.136579 kubelet[1847]: E0113 20:40:57.136521 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:57.196683 containerd[1481]: time="2025-01-13T20:40:57.196502748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:57.196683 containerd[1481]: time="2025-01-13T20:40:57.196588238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:57.196683 containerd[1481]: time="2025-01-13T20:40:57.196633845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:57.197003 containerd[1481]: time="2025-01-13T20:40:57.196785000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:57.201387 containerd[1481]: time="2025-01-13T20:40:57.196188597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:40:57.201539 containerd[1481]: time="2025-01-13T20:40:57.201372620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:40:57.201539 containerd[1481]: time="2025-01-13T20:40:57.201403514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:57.201756 containerd[1481]: time="2025-01-13T20:40:57.201677327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:40:57.332604 systemd[1]: run-containerd-runc-k8s.io-63d673090206887d9d9b57f33bbeec30b5091dfa2e44938afcea38665c837fda-runc.TLOQAz.mount: Deactivated successfully. Jan 13 20:40:57.343550 systemd[1]: Started cri-containerd-63d673090206887d9d9b57f33bbeec30b5091dfa2e44938afcea38665c837fda.scope - libcontainer container 63d673090206887d9d9b57f33bbeec30b5091dfa2e44938afcea38665c837fda. Jan 13 20:40:57.345905 systemd[1]: Started cri-containerd-a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6.scope - libcontainer container a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6. 
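Both sandboxes are now up: 63d673… for kube-proxy-q89lp and a548a6… for calico-node-mp9v9. The long run of VerifyControllerAttachedVolume entries earlier enumerated calico-node's volumes by name only; as a reading aid, a sketch of how those names typically map to host paths in the stock Calico manifest — the volume names and types come from the reconciler entries, but every hostPath below is an assumption (none are printed in this log):

    volumes:
      - name: cni-bin-dir
        hostPath: {path: /opt/cni/bin}        # CNI plugin binaries
      - name: cni-net-dir
        hostPath: {path: /etc/cni/net.d}      # where install-cni drops the CNI config
      - name: cni-log-dir
        hostPath: {path: /var/log/calico/cni}
      - name: var-run-calico
        hostPath: {path: /var/run/calico}
      - name: var-lib-calico
        hostPath: {path: /var/lib/calico}     # calico-node writes 'nodename' here; its absence
                                              # drives the sandbox failures further down
      - name: xtables-lock
        hostPath: {path: /run/xtables.lock}
      - name: lib-modules
        hostPath: {path: /lib/modules}
      - name: policysync
        hostPath: {path: /var/run/nodeagent}
      - name: flexvol-driver-host             # probe dir on this Flatcar host, per the kubelet message above
        hostPath: {path: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds}
      - name: node-certs
        secret: {secretName: node-certs}      # secret type from the reconciler; name is an assumption
      - name: tigera-ca-bundle
        configMap: {name: tigera-ca-bundle}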
Jan 13 20:40:57.392827 containerd[1481]: time="2025-01-13T20:40:57.392313976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mp9v9,Uid:9c249fcc-0413-4594-ad53-355fd7dd0193,Namespace:calico-system,Attempt:0,} returns sandbox id \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\"" Jan 13 20:40:57.397981 containerd[1481]: time="2025-01-13T20:40:57.397918887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:40:57.402110 containerd[1481]: time="2025-01-13T20:40:57.402074915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q89lp,Uid:3ce1f833-a031-4e8c-a486-abb7720b6160,Namespace:kube-system,Attempt:0,} returns sandbox id \"63d673090206887d9d9b57f33bbeec30b5091dfa2e44938afcea38665c837fda\"" Jan 13 20:40:58.136968 kubelet[1847]: E0113 20:40:58.136870 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:58.263527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295911500.mount: Deactivated successfully. Jan 13 20:40:58.282857 kubelet[1847]: E0113 20:40:58.282376 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:40:58.398364 containerd[1481]: time="2025-01-13T20:40:58.398215401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:58.400052 containerd[1481]: time="2025-01-13T20:40:58.399990560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 13 20:40:58.401537 containerd[1481]: time="2025-01-13T20:40:58.401470674Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:58.404735 containerd[1481]: time="2025-01-13T20:40:58.404665608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:40:58.405749 containerd[1481]: time="2025-01-13T20:40:58.405701876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.007710971s" Jan 13 20:40:58.405845 containerd[1481]: time="2025-01-13T20:40:58.405755104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 13 20:40:58.408011 containerd[1481]: time="2025-01-13T20:40:58.407789491Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:40:58.409737 containerd[1481]: time="2025-01-13T20:40:58.409484951Z" level=info msg="CreateContainer within sandbox 
\"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:40:58.430349 containerd[1481]: time="2025-01-13T20:40:58.430301443Z" level=info msg="CreateContainer within sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4\"" Jan 13 20:40:58.431631 containerd[1481]: time="2025-01-13T20:40:58.431571239Z" level=info msg="StartContainer for \"06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4\"" Jan 13 20:40:58.476786 systemd[1]: run-containerd-runc-k8s.io-06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4-runc.lPABqM.mount: Deactivated successfully. Jan 13 20:40:58.489427 systemd[1]: Started cri-containerd-06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4.scope - libcontainer container 06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4. Jan 13 20:40:58.534183 containerd[1481]: time="2025-01-13T20:40:58.533988825Z" level=info msg="StartContainer for \"06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4\" returns successfully" Jan 13 20:40:58.549717 systemd[1]: cri-containerd-06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4.scope: Deactivated successfully. Jan 13 20:40:58.587295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4-rootfs.mount: Deactivated successfully. Jan 13 20:40:58.642269 containerd[1481]: time="2025-01-13T20:40:58.642162836Z" level=info msg="shim disconnected" id=06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4 namespace=k8s.io Jan 13 20:40:58.642269 containerd[1481]: time="2025-01-13T20:40:58.642262042Z" level=warning msg="cleaning up after shim disconnected" id=06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4 namespace=k8s.io Jan 13 20:40:58.642649 containerd[1481]: time="2025-01-13T20:40:58.642297738Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:40:59.137921 kubelet[1847]: E0113 20:40:59.137862 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:40:59.701737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373464183.mount: Deactivated successfully. 
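The burst of nodeagent~uds driver-call failures above is expected at this stage: the kubelet's FlexVolume prober found the nodeagent~uds directory (it recreated the plugin dir at startup) but the uds binary inside it did not exist yet, so every [init] call failed with "executable file not found in $PATH". The flexvol-driver container that just ran and exited (hence the scope deactivation and "shim disconnected" cleanup) is Calico's init step that installs exactly that binary. A sketch of the init container, assuming the stock Calico mount path — only the container name, image, and volume name appear in the log:

    initContainers:
      - name: flexvol-driver
        image: ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1  # pulled above in ~1.0s
        volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver  # assumption: stock path; copies 'uds' into the host plugin dir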
Jan 13 20:41:00.138319 kubelet[1847]: E0113 20:41:00.138254 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:00.283249 kubelet[1847]: E0113 20:41:00.282352 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:00.333786 containerd[1481]: time="2025-01-13T20:41:00.333725902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:00.335027 containerd[1481]: time="2025-01-13T20:41:00.334942742Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30232138" Jan 13 20:41:00.336733 containerd[1481]: time="2025-01-13T20:41:00.336668494Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:00.339591 containerd[1481]: time="2025-01-13T20:41:00.339545006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:00.342043 containerd[1481]: time="2025-01-13T20:41:00.341079449Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.933248393s" Jan 13 20:41:00.342043 containerd[1481]: time="2025-01-13T20:41:00.341138536Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Jan 13 20:41:00.343174 containerd[1481]: time="2025-01-13T20:41:00.343119618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:41:00.345172 containerd[1481]: time="2025-01-13T20:41:00.345136157Z" level=info msg="CreateContainer within sandbox \"63d673090206887d9d9b57f33bbeec30b5091dfa2e44938afcea38665c837fda\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:41:00.385292 containerd[1481]: time="2025-01-13T20:41:00.385232345Z" level=info msg="CreateContainer within sandbox \"63d673090206887d9d9b57f33bbeec30b5091dfa2e44938afcea38665c837fda\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"151a3af60c128c8a2aba0deb2baf0dace39f2897b3cea42b04219ab314a40a19\"" Jan 13 20:41:00.386004 containerd[1481]: time="2025-01-13T20:41:00.385945857Z" level=info msg="StartContainer for \"151a3af60c128c8a2aba0deb2baf0dace39f2897b3cea42b04219ab314a40a19\"" Jan 13 20:41:00.425162 systemd[1]: Started cri-containerd-151a3af60c128c8a2aba0deb2baf0dace39f2897b3cea42b04219ab314a40a19.scope - libcontainer container 151a3af60c128c8a2aba0deb2baf0dace39f2897b3cea42b04219ab314a40a19. 
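The scope just started runs the kube-proxy container created above inside sandbox 63d673…, from the registry.k8s.io/kube-proxy:v1.31.4 image pulled moments earlier. The volumes it consumes were attached back in the reconciler entries (the kube-proxy ConfigMap, xtables-lock, lib-modules); a sketch of the container spec, with the command and mount paths assumed from the standard kubeadm-style kube-proxy DaemonSet:

    containers:
      - name: kube-proxy
        image: registry.k8s.io/kube-proxy:v1.31.4    # image and pull timing from the log
        command:
          - /usr/local/bin/kube-proxy                # assumption: standard DaemonSet command
          - --config=/var/lib/kube-proxy/config.conf
        volumeMounts:
          - name: kube-proxy                         # ConfigMap volume from the reconciler entries
            mountPath: /var/lib/kube-proxy
          - name: xtables-lock                       # shared iptables lock, also used by calico-node
            mountPath: /run/xtables.lock
          - name: lib-modules
            mountPath: /lib/modules
            readOnly: true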
Jan 13 20:41:00.470628 containerd[1481]: time="2025-01-13T20:41:00.469897007Z" level=info msg="StartContainer for \"151a3af60c128c8a2aba0deb2baf0dace39f2897b3cea42b04219ab314a40a19\" returns successfully" Jan 13 20:41:01.139165 kubelet[1847]: E0113 20:41:01.139100 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:02.139767 kubelet[1847]: E0113 20:41:02.139690 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:02.282724 kubelet[1847]: E0113 20:41:02.282632 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:03.139974 kubelet[1847]: E0113 20:41:03.139913 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:04.140252 kubelet[1847]: E0113 20:41:04.140206 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:04.281985 kubelet[1847]: E0113 20:41:04.281678 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:04.483653 containerd[1481]: time="2025-01-13T20:41:04.483496188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:04.485426 containerd[1481]: time="2025-01-13T20:41:04.485356266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 13 20:41:04.486896 containerd[1481]: time="2025-01-13T20:41:04.486828786Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:04.489895 containerd[1481]: time="2025-01-13T20:41:04.489819446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:04.491463 containerd[1481]: time="2025-01-13T20:41:04.490684022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.147496428s" Jan 13 20:41:04.491463 containerd[1481]: time="2025-01-13T20:41:04.490736772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 13 20:41:04.493353 containerd[1481]: time="2025-01-13T20:41:04.493314685Z" level=info msg="CreateContainer within sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:41:04.511910 containerd[1481]: time="2025-01-13T20:41:04.511848690Z" level=info msg="CreateContainer within sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463\"" Jan 13 20:41:04.512716 containerd[1481]: time="2025-01-13T20:41:04.512661272Z" level=info msg="StartContainer for \"f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463\"" Jan 13 20:41:04.560212 systemd[1]: Started cri-containerd-f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463.scope - libcontainer container f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463. Jan 13 20:41:04.602738 containerd[1481]: time="2025-01-13T20:41:04.602629655Z" level=info msg="StartContainer for \"f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463\" returns successfully" Jan 13 20:41:05.140801 kubelet[1847]: E0113 20:41:05.140729 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:05.345267 kubelet[1847]: I0113 20:41:05.344681 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q89lp" podStartSLOduration=7.405469486 podStartE2EDuration="10.344645074s" podCreationTimestamp="2025-01-13 20:40:55 +0000 UTC" firstStartedPulling="2025-01-13 20:40:57.403646775 +0000 UTC m=+3.709369848" lastFinishedPulling="2025-01-13 20:41:00.342822347 +0000 UTC m=+6.648545436" observedRunningTime="2025-01-13 20:41:01.328092613 +0000 UTC m=+7.633815702" watchObservedRunningTime="2025-01-13 20:41:05.344645074 +0000 UTC m=+11.650368160" Jan 13 20:41:05.438975 systemd[1]: cri-containerd-f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463.scope: Deactivated successfully. Jan 13 20:41:05.469348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463-rootfs.mount: Deactivated successfully. Jan 13 20:41:05.477515 kubelet[1847]: I0113 20:41:05.477237 1847 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:41:06.141294 kubelet[1847]: E0113 20:41:06.141223 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:06.289687 systemd[1]: Created slice kubepods-besteffort-pod6981a5c8_5fcb_4dab_9a47_f02b70fe4b4e.slice - libcontainer container kubepods-besteffort-pod6981a5c8_5fcb_4dab_9a47_f02b70fe4b4e.slice. 
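The pod_startup_latency_tracker entry for kube-proxy-q89lp above is internally consistent, and the arithmetic is worth spelling out: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Using the monotonic (m=+) offsets printed in the entry:

    pull window  = 6.648545436 s - 3.709369848 s = 2.939175588 s
    SLO duration = 10.344645074 s - 2.939175588 s = 7.405469486 s   (matches podStartSLOduration)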
Jan 13 20:41:06.293221 containerd[1481]: time="2025-01-13T20:41:06.293175742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:0,}" Jan 13 20:41:06.616930 containerd[1481]: time="2025-01-13T20:41:06.616823441Z" level=info msg="shim disconnected" id=f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463 namespace=k8s.io Jan 13 20:41:06.616930 containerd[1481]: time="2025-01-13T20:41:06.616923492Z" level=warning msg="cleaning up after shim disconnected" id=f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463 namespace=k8s.io Jan 13 20:41:06.616930 containerd[1481]: time="2025-01-13T20:41:06.616938597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:06.703336 containerd[1481]: time="2025-01-13T20:41:06.703214532Z" level=error msg="Failed to destroy network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:06.704191 containerd[1481]: time="2025-01-13T20:41:06.703899019Z" level=error msg="encountered an error cleaning up failed sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:06.704191 containerd[1481]: time="2025-01-13T20:41:06.704067689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:06.707007 kubelet[1847]: E0113 20:41:06.705212 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:06.707007 kubelet[1847]: E0113 20:41:06.705434 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:06.707007 kubelet[1847]: E0113 20:41:06.705474 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:06.707292 kubelet[1847]: E0113 20:41:06.705544 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:06.707071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9-shm.mount: Deactivated successfully. Jan 13 20:41:07.141719 kubelet[1847]: E0113 20:41:07.141647 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:07.336716 kubelet[1847]: I0113 20:41:07.336675 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9" Jan 13 20:41:07.338280 containerd[1481]: time="2025-01-13T20:41:07.337285579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:41:07.338944 containerd[1481]: time="2025-01-13T20:41:07.338575793Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:07.338944 containerd[1481]: time="2025-01-13T20:41:07.338886241Z" level=info msg="Ensure that sandbox 177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9 in task-service has been cleanup successfully" Jan 13 20:41:07.343298 containerd[1481]: time="2025-01-13T20:41:07.343096753Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:07.343298 containerd[1481]: time="2025-01-13T20:41:07.343146494Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully" Jan 13 20:41:07.345316 systemd[1]: run-netns-cni\x2d366e3890\x2de200\x2df268\x2dda86\x2d43124eed6044.mount: Deactivated successfully. 
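The Attempt:0 → StopPodSandbox → TearDown → new RunPodSandbox cycle above is the kubelet's normal retry loop for a sandbox whose CNI setup failed; teardown "succeeds" only because there is nothing left to tear down. The underlying error — stat /var/lib/calico/nodename: no such file or directory — will keep recurring until the main calico-node container, whose image pull starts here, is running and has written that file. A sketch of the relevant piece, with the image taken from the log and the mount path assumed from the stock Calico manifest:

    containers:
      - name: calico-node                           # assumption: stock container name
        image: ghcr.io/flatcar/calico/node:v3.29.1  # PullImage begins in the entry above
        volumeMounts:
          - name: var-lib-calico
            mountPath: /var/lib/calico  # 'nodename' is written here on startup, which is
                                        # precisely the file the CNI plugin stats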
Jan 13 20:41:07.346700 containerd[1481]: time="2025-01-13T20:41:07.346116064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:1,}" Jan 13 20:41:07.439867 containerd[1481]: time="2025-01-13T20:41:07.439656660Z" level=error msg="Failed to destroy network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:07.440639 containerd[1481]: time="2025-01-13T20:41:07.440462532Z" level=error msg="encountered an error cleaning up failed sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:07.440639 containerd[1481]: time="2025-01-13T20:41:07.440587353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:07.441112 kubelet[1847]: E0113 20:41:07.441044 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:07.441303 kubelet[1847]: E0113 20:41:07.441154 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:07.441303 kubelet[1847]: E0113 20:41:07.441194 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:07.441303 kubelet[1847]: E0113 20:41:07.441276 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:07.623224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33-shm.mount: Deactivated successfully. Jan 13 20:41:08.142447 kubelet[1847]: E0113 20:41:08.142378 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:08.341017 kubelet[1847]: I0113 20:41:08.339911 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33" Jan 13 20:41:08.341310 containerd[1481]: time="2025-01-13T20:41:08.341079040Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" Jan 13 20:41:08.341832 containerd[1481]: time="2025-01-13T20:41:08.341671423Z" level=info msg="Ensure that sandbox 9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33 in task-service has been cleanup successfully" Jan 13 20:41:08.344939 containerd[1481]: time="2025-01-13T20:41:08.344902831Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully" Jan 13 20:41:08.345115 systemd[1]: run-netns-cni\x2dc9d4ef0e\x2d4f46\x2d513a\x2db0fa\x2d0ed94a85eda7.mount: Deactivated successfully. Jan 13 20:41:08.345280 containerd[1481]: time="2025-01-13T20:41:08.345098935Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully" Jan 13 20:41:08.348631 containerd[1481]: time="2025-01-13T20:41:08.347850558Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:08.348631 containerd[1481]: time="2025-01-13T20:41:08.348081778Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:08.348631 containerd[1481]: time="2025-01-13T20:41:08.348111915Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully" Jan 13 20:41:08.350621 containerd[1481]: time="2025-01-13T20:41:08.350394947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:2,}" Jan 13 20:41:08.480686 systemd[1]: Created slice kubepods-besteffort-pod0c9d582e_358c_421f_9aca_e554a62d02ed.slice - libcontainer container kubepods-besteffort-pod0c9d582e_358c_421f_9aca_e554a62d02ed.slice. 
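The slice just created belongs to the pod with UID 0c9d582e-358c-421f-9aca-e554a62d02ed (the nginx Deployment pod that appears below). The naming follows systemd's slice-nesting convention under the cgroup v2 hierarchy from the NodeConfig dump ("CgroupVersion":2): dashes in the unit name encode parent slices, and dashes in the pod UID become underscores, so this BestEffort pod's cgroup lands at the following path (assuming the default cgroup2 mount point on this host):

    /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c9d582e_358c_421f_9aca_e554a62d02ed.slice/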
Jan 13 20:41:08.498576 containerd[1481]: time="2025-01-13T20:41:08.493949933Z" level=error msg="Failed to destroy network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:08.498576 containerd[1481]: time="2025-01-13T20:41:08.494423816Z" level=error msg="encountered an error cleaning up failed sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:08.498576 containerd[1481]: time="2025-01-13T20:41:08.494520035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:08.498831 kubelet[1847]: E0113 20:41:08.497011 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:08.498831 kubelet[1847]: E0113 20:41:08.497148 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:08.498831 kubelet[1847]: E0113 20:41:08.497206 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:08.498660 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c-shm.mount: Deactivated successfully. 
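The Attempt counter climbing in each RunPodSandbox line (Attempt:1, Attempt:2, ...) is part of the CRI PodSandboxMetadata: the kubelet tears the failed sandbox down, then re-issues the call with the same name, UID, and namespace and the counter bumped by one. A sketch using the real k8s.io/cri-api types, with values copied from the log; the gRPC plumbing to containerd is omitted:

```go
// The metadata printed in the RunPodSandbox log lines, built with the
// actual CRI types from k8s.io/cri-api. Only the struct is shown;
// sending it to containerd over gRPC is omitted.
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	meta := &runtimeapi.PodSandboxMetadata{
		Name:      "csi-node-driver-4fr6x",
		Uid:       "6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e",
		Namespace: "calico-system",
		Attempt:   2, // attempts 0 and 1 already failed and were torn down
	}
	fmt.Printf("RunPodSandbox for %+v\n", meta)
}
```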
Jan 13 20:41:08.499160 kubelet[1847]: E0113 20:41:08.497297 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:08.550968 kubelet[1847]: I0113 20:41:08.550901 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqz6c\" (UniqueName: \"kubernetes.io/projected/0c9d582e-358c-421f-9aca-e554a62d02ed-kube-api-access-dqz6c\") pod \"nginx-deployment-8587fbcb89-qvgsx\" (UID: \"0c9d582e-358c-421f-9aca-e554a62d02ed\") " pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:08.788278 containerd[1481]: time="2025-01-13T20:41:08.787714441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:0,}" Jan 13 20:41:08.849516 systemd[1]: Started sshd@7-10.128.0.39:22-194.0.234.38:62010.service - OpenSSH per-connection server daemon (194.0.234.38:62010). Jan 13 20:41:08.940718 containerd[1481]: time="2025-01-13T20:41:08.940655244Z" level=error msg="Failed to destroy network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:08.943619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275-shm.mount: Deactivated successfully. 
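The VerifyControllerAttachedVolume entry above concerns the "kube-api-access-dqz6c" volume of the nginx pod: a projected volume the kubelet injects for every pod, combining a bound service-account token with the cluster CA bundle. A sketch of its shape with the real k8s.io/api types; the source list here is the usual default and may differ slightly by cluster (the downward-API namespace entry is omitted for brevity):

```go
// Shape of the "kube-api-access-*" projected volume being verified in
// the reconciler log line above. Built with k8s.io/api types; the exact
// default sources are an assumption about this cluster's configuration.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607) // typical kubelet-requested token lifetime
	return corev1.Volume{
		Name: name, // e.g. "kube-api-access-dqz6c"
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
				},
			},
		},
	}
}

func main() {
	v := kubeAPIAccessVolume("kube-api-access-dqz6c")
	fmt.Println("projected volume:", v.Name)
}
```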
Jan 13 20:41:08.946589 containerd[1481]: time="2025-01-13T20:41:08.946536342Z" level=error msg="encountered an error cleaning up failed sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:08.946707 containerd[1481]: time="2025-01-13T20:41:08.946634603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:08.947056 kubelet[1847]: E0113 20:41:08.946985 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:08.947133 kubelet[1847]: E0113 20:41:08.947059 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:08.947133 kubelet[1847]: E0113 20:41:08.947092 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:08.947252 kubelet[1847]: E0113 20:41:08.947146 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-qvgsx" podUID="0c9d582e-358c-421f-9aca-e554a62d02ed" Jan 13 20:41:09.142995 kubelet[1847]: E0113 20:41:09.142809 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:09.344041 kubelet[1847]: I0113 20:41:09.343999 1847 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c" Jan 13 20:41:09.345702 containerd[1481]: time="2025-01-13T20:41:09.345150706Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\"" Jan 13 20:41:09.345702 containerd[1481]: time="2025-01-13T20:41:09.345508178Z" level=info msg="Ensure that sandbox c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c in task-service has been cleanup successfully" Jan 13 20:41:09.348696 kubelet[1847]: I0113 20:41:09.346972 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275" Jan 13 20:41:09.348832 containerd[1481]: time="2025-01-13T20:41:09.346877951Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully" Jan 13 20:41:09.348832 containerd[1481]: time="2025-01-13T20:41:09.348684085Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully" Jan 13 20:41:09.348832 containerd[1481]: time="2025-01-13T20:41:09.348635266Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\"" Jan 13 20:41:09.349496 systemd[1]: run-netns-cni\x2d504aa88a\x2ddb1d\x2d9ad1\x2d1a21\x2dc9143777ab83.mount: Deactivated successfully. Jan 13 20:41:09.351193 containerd[1481]: time="2025-01-13T20:41:09.350232580Z" level=info msg="Ensure that sandbox 83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275 in task-service has been cleanup successfully" Jan 13 20:41:09.351193 containerd[1481]: time="2025-01-13T20:41:09.350764858Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" Jan 13 20:41:09.351193 containerd[1481]: time="2025-01-13T20:41:09.351065834Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully" Jan 13 20:41:09.351193 containerd[1481]: time="2025-01-13T20:41:09.351141734Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully" Jan 13 20:41:09.351459 containerd[1481]: time="2025-01-13T20:41:09.351327120Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully" Jan 13 20:41:09.351459 containerd[1481]: time="2025-01-13T20:41:09.351364957Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully" Jan 13 20:41:09.353283 containerd[1481]: time="2025-01-13T20:41:09.352911272Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:09.353283 containerd[1481]: time="2025-01-13T20:41:09.353000622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:1,}" Jan 13 20:41:09.353283 containerd[1481]: time="2025-01-13T20:41:09.353060051Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:09.353283 containerd[1481]: time="2025-01-13T20:41:09.353085625Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully" Jan 13 20:41:09.354581 containerd[1481]: 
time="2025-01-13T20:41:09.354547182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:3,}" Jan 13 20:41:09.687586 systemd[1]: run-netns-cni\x2d8bbb7151\x2d0ff2\x2d6311\x2d542b\x2d0e08bc1b2fa2.mount: Deactivated successfully. Jan 13 20:41:09.804803 containerd[1481]: time="2025-01-13T20:41:09.804458599Z" level=error msg="Failed to destroy network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:09.805499 containerd[1481]: time="2025-01-13T20:41:09.805336578Z" level=error msg="encountered an error cleaning up failed sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:09.805892 containerd[1481]: time="2025-01-13T20:41:09.805673823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:09.809646 kubelet[1847]: E0113 20:41:09.808948 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:09.809646 kubelet[1847]: E0113 20:41:09.809146 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:09.809646 kubelet[1847]: E0113 20:41:09.809201 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:09.809942 kubelet[1847]: E0113 20:41:09.809306 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:09.812630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1-shm.mount: Deactivated successfully. Jan 13 20:41:09.817062 containerd[1481]: time="2025-01-13T20:41:09.816833536Z" level=error msg="Failed to destroy network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:09.817811 containerd[1481]: time="2025-01-13T20:41:09.817602194Z" level=error msg="encountered an error cleaning up failed sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:09.817811 containerd[1481]: time="2025-01-13T20:41:09.817680855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:09.818702 kubelet[1847]: E0113 20:41:09.818221 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:09.818702 kubelet[1847]: E0113 20:41:09.818339 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:09.818702 kubelet[1847]: E0113 20:41:09.818402 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:09.818991 kubelet[1847]: E0113 20:41:09.818475 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-qvgsx" podUID="0c9d582e-358c-421f-9aca-e554a62d02ed" Jan 13 20:41:09.822684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94-shm.mount: Deactivated successfully. Jan 13 20:41:10.143793 kubelet[1847]: E0113 20:41:10.143743 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:10.354603 kubelet[1847]: I0113 20:41:10.354019 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1" Jan 13 20:41:10.355041 containerd[1481]: time="2025-01-13T20:41:10.355000751Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\"" Jan 13 20:41:10.357031 kubelet[1847]: I0113 20:41:10.356761 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94" Jan 13 20:41:10.357641 containerd[1481]: time="2025-01-13T20:41:10.357606840Z" level=info msg="Ensure that sandbox 8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1 in task-service has been cleanup successfully" Jan 13 20:41:10.359882 containerd[1481]: time="2025-01-13T20:41:10.357653056Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\"" Jan 13 20:41:10.363478 containerd[1481]: time="2025-01-13T20:41:10.360844899Z" level=info msg="Ensure that sandbox 31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94 in task-service has been cleanup successfully" Jan 13 20:41:10.361240 systemd[1]: run-netns-cni\x2d7c2704ab\x2d060b\x2d5e92\x2dc16c\x2dd19688ec5757.mount: Deactivated successfully. Jan 13 20:41:10.364263 containerd[1481]: time="2025-01-13T20:41:10.363833721Z" level=info msg="TearDown network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" successfully" Jan 13 20:41:10.364263 containerd[1481]: time="2025-01-13T20:41:10.363866222Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" returns successfully" Jan 13 20:41:10.365070 containerd[1481]: time="2025-01-13T20:41:10.365037770Z" level=info msg="TearDown network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" successfully" Jan 13 20:41:10.365070 containerd[1481]: time="2025-01-13T20:41:10.365069161Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" returns successfully" Jan 13 20:41:10.366981 systemd[1]: run-netns-cni\x2de0abf4d1\x2d37e7\x2dd71c\x2d1d50\x2d5b9ffa47d47b.mount: Deactivated successfully. 
Jan 13 20:41:10.369441 containerd[1481]: time="2025-01-13T20:41:10.369357411Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\"" Jan 13 20:41:10.370337 containerd[1481]: time="2025-01-13T20:41:10.370043253Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully" Jan 13 20:41:10.370337 containerd[1481]: time="2025-01-13T20:41:10.370069281Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully" Jan 13 20:41:10.370337 containerd[1481]: time="2025-01-13T20:41:10.370185636Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\"" Jan 13 20:41:10.370337 containerd[1481]: time="2025-01-13T20:41:10.370273110Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully" Jan 13 20:41:10.370337 containerd[1481]: time="2025-01-13T20:41:10.370283034Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully" Jan 13 20:41:10.372598 containerd[1481]: time="2025-01-13T20:41:10.372164212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:2,}" Jan 13 20:41:10.372785 containerd[1481]: time="2025-01-13T20:41:10.372758428Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" Jan 13 20:41:10.373009 containerd[1481]: time="2025-01-13T20:41:10.372985940Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully" Jan 13 20:41:10.373153 containerd[1481]: time="2025-01-13T20:41:10.373110702Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully" Jan 13 20:41:10.373978 containerd[1481]: time="2025-01-13T20:41:10.373931735Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:10.375052 containerd[1481]: time="2025-01-13T20:41:10.375018703Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:10.375052 containerd[1481]: time="2025-01-13T20:41:10.375050680Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully" Jan 13 20:41:10.375996 containerd[1481]: time="2025-01-13T20:41:10.375810081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:4,}" Jan 13 20:41:10.428864 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 13 20:41:10.558416 containerd[1481]: time="2025-01-13T20:41:10.558295500Z" level=error msg="Failed to destroy network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:10.558796 containerd[1481]: time="2025-01-13T20:41:10.558738508Z" level=error msg="encountered an error cleaning up failed sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:10.558886 containerd[1481]: time="2025-01-13T20:41:10.558828266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:10.559662 kubelet[1847]: E0113 20:41:10.559100 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:10.559662 kubelet[1847]: E0113 20:41:10.559170 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:10.559662 kubelet[1847]: E0113 20:41:10.559201 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:10.559898 kubelet[1847]: E0113 20:41:10.559256 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-8587fbcb89-qvgsx" podUID="0c9d582e-358c-421f-9aca-e554a62d02ed" Jan 13 20:41:10.586804 containerd[1481]: time="2025-01-13T20:41:10.586613052Z" level=error msg="Failed to destroy network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:10.589445 containerd[1481]: time="2025-01-13T20:41:10.589245634Z" level=error msg="encountered an error cleaning up failed sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:10.589445 containerd[1481]: time="2025-01-13T20:41:10.589347159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:10.589822 kubelet[1847]: E0113 20:41:10.589609 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:10.589822 kubelet[1847]: E0113 20:41:10.589678 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:10.589822 kubelet[1847]: E0113 20:41:10.589711 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:10.590359 kubelet[1847]: E0113 20:41:10.589782 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:11.144291 kubelet[1847]: E0113 20:41:11.144237 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:11.364326 kubelet[1847]: I0113 20:41:11.364246 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5" Jan 13 20:41:11.365512 containerd[1481]: time="2025-01-13T20:41:11.364889072Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\"" Jan 13 20:41:11.365512 containerd[1481]: time="2025-01-13T20:41:11.365273598Z" level=info msg="Ensure that sandbox e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5 in task-service has been cleanup successfully" Jan 13 20:41:11.366602 containerd[1481]: time="2025-01-13T20:41:11.366559362Z" level=info msg="TearDown network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" successfully" Jan 13 20:41:11.366747 containerd[1481]: time="2025-01-13T20:41:11.366723998Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" returns successfully" Jan 13 20:41:11.367214 containerd[1481]: time="2025-01-13T20:41:11.367184862Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\"" Jan 13 20:41:11.367489 containerd[1481]: time="2025-01-13T20:41:11.367462631Z" level=info msg="TearDown network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" successfully" Jan 13 20:41:11.367925 containerd[1481]: time="2025-01-13T20:41:11.367681631Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" returns successfully" Jan 13 20:41:11.368365 containerd[1481]: time="2025-01-13T20:41:11.368169925Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\"" Jan 13 20:41:11.368365 containerd[1481]: time="2025-01-13T20:41:11.368285268Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully" Jan 13 20:41:11.368365 containerd[1481]: time="2025-01-13T20:41:11.368303173Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully" Jan 13 20:41:11.369492 containerd[1481]: time="2025-01-13T20:41:11.369462670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:3,}" Jan 13 20:41:11.372553 systemd[1]: run-netns-cni\x2d3fcf05eb\x2d391e\x2d7644\x2dd79b\x2d8e920a08d30a.mount: Deactivated successfully. 
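The recurring "Error syncing pod, skipping" entries come from the kubelet's pod workers, which do not spin hot on a failed CreatePodSandbox: the error is recorded and the pod is re-queued for a later sync, which is why the attempts in this log land roughly a second apart. A purely illustrative sketch of that loop shape (the interval and structure are assumptions, not the kubelet's actual pod-worker implementation):

```go
// Illustrative shape of the retry loop behind the repeating "Error
// syncing pod, skipping" entries: record the failure, re-queue, try
// again on the next sync. Not the kubelet's real code or constants.
package main

import (
	"errors"
	"fmt"
	"time"
)

func main() {
	attempt := 0
	sync := func() error {
		if attempt < 3 { // pretend the CNI plugin keeps failing
			return errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
		}
		return nil
	}
	for {
		if err := sync(); err != nil {
			fmt.Printf("attempt %d: error syncing pod, skipping: %v\n", attempt, err)
			attempt++
			time.Sleep(time.Second) // re-queue for the next sync interval
			continue
		}
		fmt.Printf("attempt %d: sandbox created\n", attempt)
		return
	}
}
```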
Jan 13 20:41:11.377801 kubelet[1847]: I0113 20:41:11.377726 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8" Jan 13 20:41:11.379715 containerd[1481]: time="2025-01-13T20:41:11.379676554Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\"" Jan 13 20:41:11.379977 containerd[1481]: time="2025-01-13T20:41:11.379906517Z" level=info msg="Ensure that sandbox 602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8 in task-service has been cleanup successfully" Jan 13 20:41:11.380313 containerd[1481]: time="2025-01-13T20:41:11.380201159Z" level=info msg="TearDown network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" successfully" Jan 13 20:41:11.380313 containerd[1481]: time="2025-01-13T20:41:11.380228113Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" returns successfully" Jan 13 20:41:11.382750 containerd[1481]: time="2025-01-13T20:41:11.382477292Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\"" Jan 13 20:41:11.382750 containerd[1481]: time="2025-01-13T20:41:11.382599652Z" level=info msg="TearDown network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" successfully" Jan 13 20:41:11.382750 containerd[1481]: time="2025-01-13T20:41:11.382617213Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" returns successfully" Jan 13 20:41:11.384823 systemd[1]: run-netns-cni\x2dfe9675e2\x2d30b0\x2dbf3f\x2da3c0\x2dc8fa49a51bb4.mount: Deactivated successfully. Jan 13 20:41:11.386528 containerd[1481]: time="2025-01-13T20:41:11.386464958Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\"" Jan 13 20:41:11.387225 containerd[1481]: time="2025-01-13T20:41:11.386610360Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully" Jan 13 20:41:11.387225 containerd[1481]: time="2025-01-13T20:41:11.386632740Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully" Jan 13 20:41:11.388091 containerd[1481]: time="2025-01-13T20:41:11.388040556Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" Jan 13 20:41:11.388737 containerd[1481]: time="2025-01-13T20:41:11.388591245Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully" Jan 13 20:41:11.388737 containerd[1481]: time="2025-01-13T20:41:11.388619662Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully" Jan 13 20:41:11.389554 containerd[1481]: time="2025-01-13T20:41:11.389373567Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:11.389554 containerd[1481]: time="2025-01-13T20:41:11.389483208Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:11.389554 containerd[1481]: time="2025-01-13T20:41:11.389500763Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns 
successfully" Jan 13 20:41:11.390372 containerd[1481]: time="2025-01-13T20:41:11.390343277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:5,}" Jan 13 20:41:11.563843 containerd[1481]: time="2025-01-13T20:41:11.563785945Z" level=error msg="Failed to destroy network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:11.564456 containerd[1481]: time="2025-01-13T20:41:11.564416028Z" level=error msg="encountered an error cleaning up failed sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:11.564658 containerd[1481]: time="2025-01-13T20:41:11.564627976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:11.565374 kubelet[1847]: E0113 20:41:11.565311 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:11.565493 kubelet[1847]: E0113 20:41:11.565409 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:11.565493 kubelet[1847]: E0113 20:41:11.565444 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:11.565595 kubelet[1847]: E0113 20:41:11.565501 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-qvgsx" podUID="0c9d582e-358c-421f-9aca-e554a62d02ed" Jan 13 20:41:11.585917 containerd[1481]: time="2025-01-13T20:41:11.585862314Z" level=error msg="Failed to destroy network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:11.586604 containerd[1481]: time="2025-01-13T20:41:11.586561149Z" level=error msg="encountered an error cleaning up failed sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:11.586914 containerd[1481]: time="2025-01-13T20:41:11.586877483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:11.587432 kubelet[1847]: E0113 20:41:11.587374 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:11.587885 kubelet[1847]: E0113 20:41:11.587825 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:11.588094 kubelet[1847]: E0113 20:41:11.588067 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:11.588327 kubelet[1847]: E0113 20:41:11.588292 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:11.670865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0-shm.mount: Deactivated successfully. Jan 13 20:41:12.145364 kubelet[1847]: E0113 20:41:12.145217 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:12.190596 sshd[2406]: Invalid user ubnt from 194.0.234.38 port 62010 Jan 13 20:41:12.384306 kubelet[1847]: I0113 20:41:12.383555 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda" Jan 13 20:41:12.384773 containerd[1481]: time="2025-01-13T20:41:12.384538668Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\"" Jan 13 20:41:12.385792 containerd[1481]: time="2025-01-13T20:41:12.384812141Z" level=info msg="Ensure that sandbox 8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda in task-service has been cleanup successfully" Jan 13 20:41:12.387804 containerd[1481]: time="2025-01-13T20:41:12.387764174Z" level=info msg="TearDown network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" successfully" Jan 13 20:41:12.387804 containerd[1481]: time="2025-01-13T20:41:12.387800112Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" returns successfully" Jan 13 20:41:12.389066 containerd[1481]: time="2025-01-13T20:41:12.389019901Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\"" Jan 13 20:41:12.389161 containerd[1481]: time="2025-01-13T20:41:12.389144015Z" level=info msg="TearDown network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" successfully" Jan 13 20:41:12.389220 containerd[1481]: time="2025-01-13T20:41:12.389162843Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" returns successfully" Jan 13 20:41:12.389691 systemd[1]: run-netns-cni\x2da0ed0ff9\x2d01b7\x2db437\x2dd722\x2d38f4bc6666fe.mount: Deactivated successfully. 
Jan 13 20:41:12.390384 containerd[1481]: time="2025-01-13T20:41:12.390075708Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\"" Jan 13 20:41:12.390384 containerd[1481]: time="2025-01-13T20:41:12.390191051Z" level=info msg="TearDown network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" successfully" Jan 13 20:41:12.390384 containerd[1481]: time="2025-01-13T20:41:12.390209981Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" returns successfully" Jan 13 20:41:12.391316 containerd[1481]: time="2025-01-13T20:41:12.391249212Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\"" Jan 13 20:41:12.391910 containerd[1481]: time="2025-01-13T20:41:12.391531556Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully" Jan 13 20:41:12.391910 containerd[1481]: time="2025-01-13T20:41:12.391552123Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully" Jan 13 20:41:12.392209 containerd[1481]: time="2025-01-13T20:41:12.392180207Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" Jan 13 20:41:12.394073 containerd[1481]: time="2025-01-13T20:41:12.392298589Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully" Jan 13 20:41:12.394073 containerd[1481]: time="2025-01-13T20:41:12.392320384Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully" Jan 13 20:41:12.394073 containerd[1481]: time="2025-01-13T20:41:12.393601288Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:12.394073 containerd[1481]: time="2025-01-13T20:41:12.394020450Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:12.394309 kubelet[1847]: I0113 20:41:12.392591 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0" Jan 13 20:41:12.394380 containerd[1481]: time="2025-01-13T20:41:12.394239794Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully" Jan 13 20:41:12.394380 containerd[1481]: time="2025-01-13T20:41:12.394346341Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\"" Jan 13 20:41:12.394672 containerd[1481]: time="2025-01-13T20:41:12.394638171Z" level=info msg="Ensure that sandbox 02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0 in task-service has been cleanup successfully" Jan 13 20:41:12.397038 containerd[1481]: time="2025-01-13T20:41:12.395777654Z" level=info msg="TearDown network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" successfully" Jan 13 20:41:12.397038 containerd[1481]: time="2025-01-13T20:41:12.395806977Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" returns successfully" Jan 13 20:41:12.399670 containerd[1481]: time="2025-01-13T20:41:12.399634797Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:6,}" Jan 13 20:41:12.401175 containerd[1481]: time="2025-01-13T20:41:12.400053198Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\"" Jan 13 20:41:12.401175 containerd[1481]: time="2025-01-13T20:41:12.400170722Z" level=info msg="TearDown network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" successfully" Jan 13 20:41:12.401175 containerd[1481]: time="2025-01-13T20:41:12.400189946Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" returns successfully" Jan 13 20:41:12.401175 containerd[1481]: time="2025-01-13T20:41:12.401075129Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\"" Jan 13 20:41:12.400821 systemd[1]: run-netns-cni\x2d73be0233\x2d238f\x2df0dd\x2d80a1\x2d896db3df0c15.mount: Deactivated successfully. Jan 13 20:41:12.401546 containerd[1481]: time="2025-01-13T20:41:12.401192669Z" level=info msg="TearDown network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" successfully" Jan 13 20:41:12.401546 containerd[1481]: time="2025-01-13T20:41:12.401210749Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" returns successfully" Jan 13 20:41:12.401656 containerd[1481]: time="2025-01-13T20:41:12.401614632Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\"" Jan 13 20:41:12.402534 containerd[1481]: time="2025-01-13T20:41:12.401721808Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully" Jan 13 20:41:12.402534 containerd[1481]: time="2025-01-13T20:41:12.401744591Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully" Jan 13 20:41:12.403264 containerd[1481]: time="2025-01-13T20:41:12.403071110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:4,}" Jan 13 20:41:12.435177 sshd[2406]: Connection closed by invalid user ubnt 194.0.234.38 port 62010 [preauth] Jan 13 20:41:12.438285 systemd[1]: sshd@7-10.128.0.39:22-194.0.234.38:62010.service: Deactivated successfully. 
Jan 13 20:41:12.554639 containerd[1481]: time="2025-01-13T20:41:12.554466781Z" level=error msg="Failed to destroy network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:12.555563 containerd[1481]: time="2025-01-13T20:41:12.555416139Z" level=error msg="encountered an error cleaning up failed sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:12.555563 containerd[1481]: time="2025-01-13T20:41:12.555513159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:12.556093 kubelet[1847]: E0113 20:41:12.555770 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:12.556093 kubelet[1847]: E0113 20:41:12.555841 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:12.556093 kubelet[1847]: E0113 20:41:12.555875 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:12.556295 kubelet[1847]: E0113 20:41:12.555970 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" 
podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:12.607154 containerd[1481]: time="2025-01-13T20:41:12.606682400Z" level=error msg="Failed to destroy network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:12.607154 containerd[1481]: time="2025-01-13T20:41:12.607131229Z" level=error msg="encountered an error cleaning up failed sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:12.607437 containerd[1481]: time="2025-01-13T20:41:12.607226233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:12.607554 kubelet[1847]: E0113 20:41:12.607487 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:12.607623 kubelet[1847]: E0113 20:41:12.607561 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:12.607623 kubelet[1847]: E0113 20:41:12.607599 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:12.607731 kubelet[1847]: E0113 20:41:12.607656 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-qvgsx" podUID="0c9d582e-358c-421f-9aca-e554a62d02ed" Jan 13 20:41:12.672170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad-shm.mount: Deactivated successfully. Jan 13 20:41:13.146373 kubelet[1847]: E0113 20:41:13.146285 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:13.403005 kubelet[1847]: I0113 20:41:13.402468 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad" Jan 13 20:41:13.405130 containerd[1481]: time="2025-01-13T20:41:13.405084915Z" level=info msg="StopPodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\"" Jan 13 20:41:13.405989 containerd[1481]: time="2025-01-13T20:41:13.405370496Z" level=info msg="Ensure that sandbox 214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad in task-service has been cleanup successfully" Jan 13 20:41:13.405989 containerd[1481]: time="2025-01-13T20:41:13.405588832Z" level=info msg="TearDown network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" successfully" Jan 13 20:41:13.405989 containerd[1481]: time="2025-01-13T20:41:13.405611061Z" level=info msg="StopPodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" returns successfully" Jan 13 20:41:13.408706 containerd[1481]: time="2025-01-13T20:41:13.408660446Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\"" Jan 13 20:41:13.408814 containerd[1481]: time="2025-01-13T20:41:13.408784797Z" level=info msg="TearDown network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" successfully" Jan 13 20:41:13.408814 containerd[1481]: time="2025-01-13T20:41:13.408802938Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" returns successfully" Jan 13 20:41:13.411047 containerd[1481]: time="2025-01-13T20:41:13.410572548Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\"" Jan 13 20:41:13.411047 containerd[1481]: time="2025-01-13T20:41:13.410680344Z" level=info msg="TearDown network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" successfully" Jan 13 20:41:13.411047 containerd[1481]: time="2025-01-13T20:41:13.410695475Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" returns successfully" Jan 13 20:41:13.410391 systemd[1]: run-netns-cni\x2dcdd1895f\x2d8b01\x2d3a39\x2dc256\x2dadc5d69898c3.mount: Deactivated successfully. 
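The failure being retried above is the Calico CNI plugin's preflight: it stats /var/lib/calico/nodename, a file that exists only after the calico/node container has started with /var/lib/calico mounted (exactly what the error text suggests checking). A minimal sketch of that check, not Calico's actual source:

package main

import (
	"fmt"
	"os"
)

// Written by calico/node at startup; read by the CNI plugin on every invocation.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	// Mirrors the failing call in the log: stat /var/lib/calico/nodename.
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("plugin would fail: %v (is calico/node running with /var/lib/calico mounted?)\n", err)
		return
	}
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Println("read error:", err)
		return
	}
	fmt.Printf("CNI would network pods as node %q\n", string(name))
}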
Jan 13 20:41:13.413222 containerd[1481]: time="2025-01-13T20:41:13.412397451Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\"" Jan 13 20:41:13.413222 containerd[1481]: time="2025-01-13T20:41:13.412515655Z" level=info msg="TearDown network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" successfully" Jan 13 20:41:13.413222 containerd[1481]: time="2025-01-13T20:41:13.412532597Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" returns successfully" Jan 13 20:41:13.413222 containerd[1481]: time="2025-01-13T20:41:13.412946538Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\"" Jan 13 20:41:13.413222 containerd[1481]: time="2025-01-13T20:41:13.413076274Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully" Jan 13 20:41:13.413222 containerd[1481]: time="2025-01-13T20:41:13.413093638Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully" Jan 13 20:41:13.414772 containerd[1481]: time="2025-01-13T20:41:13.414127475Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" Jan 13 20:41:13.414772 containerd[1481]: time="2025-01-13T20:41:13.414233297Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully" Jan 13 20:41:13.414772 containerd[1481]: time="2025-01-13T20:41:13.414250920Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully" Jan 13 20:41:13.415729 containerd[1481]: time="2025-01-13T20:41:13.415644222Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:13.415823 containerd[1481]: time="2025-01-13T20:41:13.415779788Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:13.415823 containerd[1481]: time="2025-01-13T20:41:13.415799422Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully" Jan 13 20:41:13.416516 kubelet[1847]: I0113 20:41:13.416230 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc" Jan 13 20:41:13.417440 containerd[1481]: time="2025-01-13T20:41:13.417392986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:7,}" Jan 13 20:41:13.422142 containerd[1481]: time="2025-01-13T20:41:13.422098792Z" level=info msg="StopPodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\"" Jan 13 20:41:13.422441 containerd[1481]: time="2025-01-13T20:41:13.422393267Z" level=info msg="Ensure that sandbox e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc in task-service has been cleanup successfully" Jan 13 20:41:13.422821 containerd[1481]: time="2025-01-13T20:41:13.422614324Z" level=info msg="TearDown network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" successfully" Jan 13 20:41:13.422821 containerd[1481]: time="2025-01-13T20:41:13.422640447Z" level=info 
msg="StopPodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" returns successfully" Jan 13 20:41:13.427376 containerd[1481]: time="2025-01-13T20:41:13.426401000Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\"" Jan 13 20:41:13.427376 containerd[1481]: time="2025-01-13T20:41:13.426520134Z" level=info msg="TearDown network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" successfully" Jan 13 20:41:13.427376 containerd[1481]: time="2025-01-13T20:41:13.426538498Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" returns successfully" Jan 13 20:41:13.426657 systemd[1]: run-netns-cni\x2de7530a15\x2d09b9\x2d0cde\x2dc3ca\x2d307fe086ded5.mount: Deactivated successfully. Jan 13 20:41:13.429182 containerd[1481]: time="2025-01-13T20:41:13.428253320Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\"" Jan 13 20:41:13.429182 containerd[1481]: time="2025-01-13T20:41:13.428373663Z" level=info msg="TearDown network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" successfully" Jan 13 20:41:13.429182 containerd[1481]: time="2025-01-13T20:41:13.428392018Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" returns successfully" Jan 13 20:41:13.430885 containerd[1481]: time="2025-01-13T20:41:13.430850809Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\"" Jan 13 20:41:13.431279 containerd[1481]: time="2025-01-13T20:41:13.431011206Z" level=info msg="TearDown network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" successfully" Jan 13 20:41:13.431279 containerd[1481]: time="2025-01-13T20:41:13.431042684Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" returns successfully" Jan 13 20:41:13.432155 containerd[1481]: time="2025-01-13T20:41:13.431783145Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\"" Jan 13 20:41:13.432155 containerd[1481]: time="2025-01-13T20:41:13.431936406Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully" Jan 13 20:41:13.432155 containerd[1481]: time="2025-01-13T20:41:13.431984803Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully" Jan 13 20:41:13.433814 containerd[1481]: time="2025-01-13T20:41:13.433165796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:5,}" Jan 13 20:41:13.589983 containerd[1481]: time="2025-01-13T20:41:13.589900590Z" level=error msg="Failed to destroy network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:13.591461 containerd[1481]: time="2025-01-13T20:41:13.591300886Z" level=error msg="encountered an error cleaning up failed sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:13.591461 containerd[1481]: time="2025-01-13T20:41:13.591393881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:13.591823 kubelet[1847]: E0113 20:41:13.591662 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:13.591823 kubelet[1847]: E0113 20:41:13.591734 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:13.591823 kubelet[1847]: E0113 20:41:13.591777 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:13.592346 kubelet[1847]: E0113 20:41:13.591833 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:13.627435 containerd[1481]: time="2025-01-13T20:41:13.627138298Z" level=error msg="Failed to destroy network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:13.628791 containerd[1481]: time="2025-01-13T20:41:13.628629368Z" level=error msg="encountered an error cleaning up failed sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:13.628791 containerd[1481]: time="2025-01-13T20:41:13.628719220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:13.629446 kubelet[1847]: E0113 20:41:13.629072 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:13.629446 kubelet[1847]: E0113 20:41:13.629143 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:13.629446 kubelet[1847]: E0113 20:41:13.629176 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:13.630273 kubelet[1847]: E0113 20:41:13.629236 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-qvgsx" podUID="0c9d582e-358c-421f-9aca-e554a62d02ed" Jan 13 20:41:13.673398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958-shm.mount: Deactivated successfully. 
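Each failed CreatePodSandbox above ends in pod_workers' "Error syncing pod, skipping"; kubelet then retries on a later sync with the sandbox Attempt counter incremented (6, 7, 8, 9 between 20:41:12 and 20:41:15, roughly one attempt per second). A toy loop showing the shape of that retry behavior, not kubelet's actual implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoNodename = errors.New("stat /var/lib/calico/nodename: no such file or directory")

// syncPod stands in for one kubelet sync; it fails until the CNI prerequisite exists.
func syncPod(attempt int, ready bool) error {
	if !ready {
		return fmt.Errorf("CreatePodSandbox attempt %d: %w", attempt, errNoNodename)
	}
	return nil
}

func main() {
	ready := false
	for attempt := 6; ; attempt++ {
		if attempt == 9 {
			ready = true // in the log, calico-node comes up around Attempt 9
		}
		if err := syncPod(attempt, ready); err != nil {
			fmt.Println("Error syncing pod, skipping:", err)
			time.Sleep(time.Second) // the log shows ~1s between attempts
			continue
		}
		fmt.Printf("attempt %d: sandbox created\n", attempt)
		return
	}
}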
Jan 13 20:41:14.147991 kubelet[1847]: E0113 20:41:14.147080 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:14.425011 kubelet[1847]: I0113 20:41:14.424036 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958" Jan 13 20:41:14.425169 containerd[1481]: time="2025-01-13T20:41:14.424896980Z" level=info msg="StopPodSandbox for \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\"" Jan 13 20:41:14.425636 containerd[1481]: time="2025-01-13T20:41:14.425278165Z" level=info msg="Ensure that sandbox a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958 in task-service has been cleanup successfully" Jan 13 20:41:14.426316 containerd[1481]: time="2025-01-13T20:41:14.425825425Z" level=info msg="TearDown network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\" successfully" Jan 13 20:41:14.426316 containerd[1481]: time="2025-01-13T20:41:14.425858363Z" level=info msg="StopPodSandbox for \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\" returns successfully" Jan 13 20:41:14.426988 containerd[1481]: time="2025-01-13T20:41:14.426932361Z" level=info msg="StopPodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\"" Jan 13 20:41:14.427432 containerd[1481]: time="2025-01-13T20:41:14.427204973Z" level=info msg="TearDown network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" successfully" Jan 13 20:41:14.427432 containerd[1481]: time="2025-01-13T20:41:14.427230413Z" level=info msg="StopPodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" returns successfully" Jan 13 20:41:14.428554 containerd[1481]: time="2025-01-13T20:41:14.428156817Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\"" Jan 13 20:41:14.428554 containerd[1481]: time="2025-01-13T20:41:14.428269711Z" level=info msg="TearDown network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" successfully" Jan 13 20:41:14.428554 containerd[1481]: time="2025-01-13T20:41:14.428285953Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" returns successfully" Jan 13 20:41:14.430382 containerd[1481]: time="2025-01-13T20:41:14.430315714Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\"" Jan 13 20:41:14.430540 containerd[1481]: time="2025-01-13T20:41:14.430425687Z" level=info msg="TearDown network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" successfully" Jan 13 20:41:14.430540 containerd[1481]: time="2025-01-13T20:41:14.430494169Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" returns successfully" Jan 13 20:41:14.431453 containerd[1481]: time="2025-01-13T20:41:14.431245259Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\"" Jan 13 20:41:14.431453 containerd[1481]: time="2025-01-13T20:41:14.431373433Z" level=info msg="TearDown network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" successfully" Jan 13 20:41:14.431453 containerd[1481]: time="2025-01-13T20:41:14.431390760Z" level=info msg="StopPodSandbox for 
\"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" returns successfully" Jan 13 20:41:14.432810 containerd[1481]: time="2025-01-13T20:41:14.432460142Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\"" Jan 13 20:41:14.432810 containerd[1481]: time="2025-01-13T20:41:14.432571093Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully" Jan 13 20:41:14.432810 containerd[1481]: time="2025-01-13T20:41:14.432587740Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully" Jan 13 20:41:14.433148 systemd[1]: run-netns-cni\x2d3e43fb1b\x2d0a06\x2d0d9a\x2dddad\x2d9dd25a3d14aa.mount: Deactivated successfully. Jan 13 20:41:14.434382 containerd[1481]: time="2025-01-13T20:41:14.434109230Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" Jan 13 20:41:14.434382 containerd[1481]: time="2025-01-13T20:41:14.434215491Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully" Jan 13 20:41:14.434382 containerd[1481]: time="2025-01-13T20:41:14.434232525Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully" Jan 13 20:41:14.435680 containerd[1481]: time="2025-01-13T20:41:14.435387526Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:14.435680 containerd[1481]: time="2025-01-13T20:41:14.435494548Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:14.435680 containerd[1481]: time="2025-01-13T20:41:14.435512889Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully" Jan 13 20:41:14.438188 containerd[1481]: time="2025-01-13T20:41:14.437482903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:8,}" Jan 13 20:41:14.440121 kubelet[1847]: I0113 20:41:14.439230 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652" Jan 13 20:41:14.440806 containerd[1481]: time="2025-01-13T20:41:14.440766110Z" level=info msg="StopPodSandbox for \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\"" Jan 13 20:41:14.441188 containerd[1481]: time="2025-01-13T20:41:14.441157044Z" level=info msg="Ensure that sandbox 32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652 in task-service has been cleanup successfully" Jan 13 20:41:14.443974 containerd[1481]: time="2025-01-13T20:41:14.441494129Z" level=info msg="TearDown network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\" successfully" Jan 13 20:41:14.444239 containerd[1481]: time="2025-01-13T20:41:14.444104085Z" level=info msg="StopPodSandbox for \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\" returns successfully" Jan 13 20:41:14.445040 containerd[1481]: time="2025-01-13T20:41:14.444542791Z" level=info msg="StopPodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\"" Jan 13 20:41:14.445040 containerd[1481]: time="2025-01-13T20:41:14.444650729Z" 
level=info msg="TearDown network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" successfully" Jan 13 20:41:14.445040 containerd[1481]: time="2025-01-13T20:41:14.444667782Z" level=info msg="StopPodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" returns successfully" Jan 13 20:41:14.444820 systemd[1]: run-netns-cni\x2da6600e12\x2d30c2\x2d8d47\x2d08bb\x2d798e8edc09f3.mount: Deactivated successfully. Jan 13 20:41:14.445674 containerd[1481]: time="2025-01-13T20:41:14.445647257Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\"" Jan 13 20:41:14.446847 containerd[1481]: time="2025-01-13T20:41:14.446250670Z" level=info msg="TearDown network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" successfully" Jan 13 20:41:14.446847 containerd[1481]: time="2025-01-13T20:41:14.446276161Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" returns successfully" Jan 13 20:41:14.448124 containerd[1481]: time="2025-01-13T20:41:14.448092956Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\"" Jan 13 20:41:14.448399 containerd[1481]: time="2025-01-13T20:41:14.448375932Z" level=info msg="TearDown network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" successfully" Jan 13 20:41:14.448499 containerd[1481]: time="2025-01-13T20:41:14.448481901Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" returns successfully" Jan 13 20:41:14.452544 containerd[1481]: time="2025-01-13T20:41:14.452448762Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\"" Jan 13 20:41:14.452664 containerd[1481]: time="2025-01-13T20:41:14.452630643Z" level=info msg="TearDown network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" successfully" Jan 13 20:41:14.452664 containerd[1481]: time="2025-01-13T20:41:14.452649275Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" returns successfully" Jan 13 20:41:14.453105 containerd[1481]: time="2025-01-13T20:41:14.453074894Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\"" Jan 13 20:41:14.453205 containerd[1481]: time="2025-01-13T20:41:14.453186594Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully" Jan 13 20:41:14.453265 containerd[1481]: time="2025-01-13T20:41:14.453204002Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully" Jan 13 20:41:14.456395 containerd[1481]: time="2025-01-13T20:41:14.456217090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:6,}" Jan 13 20:41:14.628488 containerd[1481]: time="2025-01-13T20:41:14.628346356Z" level=error msg="Failed to destroy network for sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:14.629316 containerd[1481]: 
time="2025-01-13T20:41:14.629155736Z" level=error msg="encountered an error cleaning up failed sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:14.629316 containerd[1481]: time="2025-01-13T20:41:14.629272807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:14.630048 kubelet[1847]: E0113 20:41:14.629844 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:14.630048 kubelet[1847]: E0113 20:41:14.629919 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:14.630607 kubelet[1847]: E0113 20:41:14.630357 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4fr6x" Jan 13 20:41:14.630881 kubelet[1847]: E0113 20:41:14.630490 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4fr6x_calico-system(6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4fr6x" podUID="6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e" Jan 13 20:41:14.638011 containerd[1481]: time="2025-01-13T20:41:14.637608741Z" level=error msg="Failed to destroy network for sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jan 13 20:41:14.638202 containerd[1481]: time="2025-01-13T20:41:14.638166700Z" level=error msg="encountered an error cleaning up failed sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:14.638370 containerd[1481]: time="2025-01-13T20:41:14.638340376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:14.639158 kubelet[1847]: E0113 20:41:14.638731 1847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:41:14.639158 kubelet[1847]: E0113 20:41:14.638791 1847 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:14.639158 kubelet[1847]: E0113 20:41:14.638822 1847 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-qvgsx" Jan 13 20:41:14.639399 kubelet[1847]: E0113 20:41:14.638883 1847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-qvgsx_default(0c9d582e-358c-421f-9aca-e554a62d02ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-qvgsx" podUID="0c9d582e-358c-421f-9aca-e554a62d02ed" Jan 13 20:41:14.658103 containerd[1481]: time="2025-01-13T20:41:14.658047167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:14.659042 containerd[1481]: time="2025-01-13T20:41:14.658945911Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 13 20:41:14.660722 containerd[1481]: time="2025-01-13T20:41:14.660657930Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:14.663427 containerd[1481]: time="2025-01-13T20:41:14.663366834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:14.664352 containerd[1481]: time="2025-01-13T20:41:14.664181296Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.326839057s" Jan 13 20:41:14.664352 containerd[1481]: time="2025-01-13T20:41:14.664222798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 13 20:41:14.671333 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428-shm.mount: Deactivated successfully. Jan 13 20:41:14.671732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1377262552.mount: Deactivated successfully. Jan 13 20:41:14.684811 containerd[1481]: time="2025-01-13T20:41:14.682935526Z" level=info msg="CreateContainer within sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:41:14.707111 containerd[1481]: time="2025-01-13T20:41:14.707055037Z" level=info msg="CreateContainer within sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\"" Jan 13 20:41:14.708009 containerd[1481]: time="2025-01-13T20:41:14.707852448Z" level=info msg="StartContainer for \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\"" Jan 13 20:41:14.756205 systemd[1]: Started cri-containerd-59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49.scope - libcontainer container 59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49. Jan 13 20:41:14.798135 containerd[1481]: time="2025-01-13T20:41:14.798039785Z" level=info msg="StartContainer for \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\" returns successfully" Jan 13 20:41:14.899372 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:41:14.899524 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 20:41:15.134986 kubelet[1847]: E0113 20:41:15.134912 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:15.147695 kubelet[1847]: E0113 20:41:15.147575 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:15.447395 kubelet[1847]: I0113 20:41:15.447263 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46" Jan 13 20:41:15.448835 containerd[1481]: time="2025-01-13T20:41:15.448788258Z" level=info msg="StopPodSandbox for \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\"" Jan 13 20:41:15.451504 containerd[1481]: time="2025-01-13T20:41:15.449979552Z" level=info msg="Ensure that sandbox e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46 in task-service has been cleanup successfully" Jan 13 20:41:15.451504 containerd[1481]: time="2025-01-13T20:41:15.450241642Z" level=info msg="TearDown network for sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\" successfully" Jan 13 20:41:15.451504 containerd[1481]: time="2025-01-13T20:41:15.450261271Z" level=info msg="StopPodSandbox for \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\" returns successfully" Jan 13 20:41:15.451504 containerd[1481]: time="2025-01-13T20:41:15.450925377Z" level=info msg="StopPodSandbox for \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\"" Jan 13 20:41:15.451504 containerd[1481]: time="2025-01-13T20:41:15.451064940Z" level=info msg="TearDown network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\" successfully" Jan 13 20:41:15.451504 containerd[1481]: time="2025-01-13T20:41:15.451085131Z" level=info msg="StopPodSandbox for \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\" returns successfully" Jan 13 20:41:15.453659 containerd[1481]: time="2025-01-13T20:41:15.452550521Z" level=info msg="StopPodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\"" Jan 13 20:41:15.453659 containerd[1481]: time="2025-01-13T20:41:15.452847902Z" level=info msg="TearDown network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" successfully" Jan 13 20:41:15.453659 containerd[1481]: time="2025-01-13T20:41:15.452996351Z" level=info msg="StopPodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" returns successfully" Jan 13 20:41:15.454287 containerd[1481]: time="2025-01-13T20:41:15.453572551Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\"" Jan 13 20:41:15.454472 systemd[1]: Created slice kubepods-besteffort-pod6e996783_8727_476d_aada_d8e341c8b40e.slice - libcontainer container kubepods-besteffort-pod6e996783_8727_476d_aada_d8e341c8b40e.slice. 
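The recurring kubelet "Unable to read config path" lines (also at 20:41:13.146 and 20:41:14.147 above) are benign: kubelet watches its static-pod manifest directory, and this node simply has none. Creating the directory, even empty, should be enough to quiet the message; the path is taken from the log:

package main

import (
	"log"
	"os"
)

func main() {
	// kubelet's staticPodPath on this node, per the repeated log line.
	if err := os.MkdirAll("/etc/kubernetes/manifests", 0o755); err != nil {
		log.Fatal(err)
	}
}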
Jan 13 20:41:15.455057 containerd[1481]: time="2025-01-13T20:41:15.454256720Z" level=info msg="TearDown network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" successfully" Jan 13 20:41:15.455057 containerd[1481]: time="2025-01-13T20:41:15.454512372Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" returns successfully" Jan 13 20:41:15.457082 containerd[1481]: time="2025-01-13T20:41:15.456045990Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\"" Jan 13 20:41:15.457082 containerd[1481]: time="2025-01-13T20:41:15.456164996Z" level=info msg="TearDown network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" successfully" Jan 13 20:41:15.457082 containerd[1481]: time="2025-01-13T20:41:15.456184112Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" returns successfully" Jan 13 20:41:15.457801 containerd[1481]: time="2025-01-13T20:41:15.457474950Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\"" Jan 13 20:41:15.457801 containerd[1481]: time="2025-01-13T20:41:15.457682475Z" level=info msg="TearDown network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" successfully" Jan 13 20:41:15.457801 containerd[1481]: time="2025-01-13T20:41:15.457704477Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" returns successfully" Jan 13 20:41:15.458811 containerd[1481]: time="2025-01-13T20:41:15.458568438Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\"" Jan 13 20:41:15.458811 containerd[1481]: time="2025-01-13T20:41:15.458695753Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully" Jan 13 20:41:15.458811 containerd[1481]: time="2025-01-13T20:41:15.458713789Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully" Jan 13 20:41:15.459948 containerd[1481]: time="2025-01-13T20:41:15.459474378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:7,}" Jan 13 20:41:15.463804 kubelet[1847]: I0113 20:41:15.463756 1847 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428" Jan 13 20:41:15.465087 containerd[1481]: time="2025-01-13T20:41:15.464591808Z" level=info msg="StopPodSandbox for \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\"" Jan 13 20:41:15.465087 containerd[1481]: time="2025-01-13T20:41:15.464895638Z" level=info msg="Ensure that sandbox f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428 in task-service has been cleanup successfully" Jan 13 20:41:15.465335 containerd[1481]: time="2025-01-13T20:41:15.465295565Z" level=info msg="TearDown network for sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\" successfully" Jan 13 20:41:15.465505 containerd[1481]: time="2025-01-13T20:41:15.465436676Z" level=info msg="StopPodSandbox for \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\" returns successfully" Jan 13 20:41:15.466057 containerd[1481]: 
time="2025-01-13T20:41:15.465936947Z" level=info msg="StopPodSandbox for \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\"" Jan 13 20:41:15.466260 containerd[1481]: time="2025-01-13T20:41:15.466177972Z" level=info msg="TearDown network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\" successfully" Jan 13 20:41:15.466422 containerd[1481]: time="2025-01-13T20:41:15.466347900Z" level=info msg="StopPodSandbox for \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\" returns successfully" Jan 13 20:41:15.466882 containerd[1481]: time="2025-01-13T20:41:15.466790777Z" level=info msg="StopPodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\"" Jan 13 20:41:15.467086 containerd[1481]: time="2025-01-13T20:41:15.466907363Z" level=info msg="TearDown network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" successfully" Jan 13 20:41:15.467086 containerd[1481]: time="2025-01-13T20:41:15.466926069Z" level=info msg="StopPodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" returns successfully" Jan 13 20:41:15.467457 containerd[1481]: time="2025-01-13T20:41:15.467387957Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\"" Jan 13 20:41:15.467518 containerd[1481]: time="2025-01-13T20:41:15.467492241Z" level=info msg="TearDown network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" successfully" Jan 13 20:41:15.467518 containerd[1481]: time="2025-01-13T20:41:15.467509265Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" returns successfully" Jan 13 20:41:15.467931 containerd[1481]: time="2025-01-13T20:41:15.467824638Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\"" Jan 13 20:41:15.468268 containerd[1481]: time="2025-01-13T20:41:15.467935305Z" level=info msg="TearDown network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" successfully" Jan 13 20:41:15.468268 containerd[1481]: time="2025-01-13T20:41:15.468004955Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" returns successfully" Jan 13 20:41:15.468765 containerd[1481]: time="2025-01-13T20:41:15.468625299Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\"" Jan 13 20:41:15.468765 containerd[1481]: time="2025-01-13T20:41:15.468749743Z" level=info msg="TearDown network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" successfully" Jan 13 20:41:15.468911 containerd[1481]: time="2025-01-13T20:41:15.468769001Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" returns successfully" Jan 13 20:41:15.469117 containerd[1481]: time="2025-01-13T20:41:15.469087164Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\"" Jan 13 20:41:15.469313 containerd[1481]: time="2025-01-13T20:41:15.469205907Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully" Jan 13 20:41:15.469313 containerd[1481]: time="2025-01-13T20:41:15.469224345Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully" Jan 13 
20:41:15.469638 containerd[1481]: time="2025-01-13T20:41:15.469598203Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\"" Jan 13 20:41:15.469789 containerd[1481]: time="2025-01-13T20:41:15.469716249Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully" Jan 13 20:41:15.469789 containerd[1481]: time="2025-01-13T20:41:15.469740624Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully" Jan 13 20:41:15.470204 containerd[1481]: time="2025-01-13T20:41:15.470175494Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\"" Jan 13 20:41:15.470468 containerd[1481]: time="2025-01-13T20:41:15.470285131Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully" Jan 13 20:41:15.470468 containerd[1481]: time="2025-01-13T20:41:15.470304260Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully" Jan 13 20:41:15.471882 containerd[1481]: time="2025-01-13T20:41:15.471110178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:9,}" Jan 13 20:41:15.500227 kubelet[1847]: I0113 20:41:15.500122 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpzz7\" (UniqueName: \"kubernetes.io/projected/6e996783-8727-476d-aada-d8e341c8b40e-kube-api-access-dpzz7\") pod \"calico-typha-5778476d56-87vtk\" (UID: \"6e996783-8727-476d-aada-d8e341c8b40e\") " pod="calico-system/calico-typha-5778476d56-87vtk" Jan 13 20:41:15.502236 kubelet[1847]: I0113 20:41:15.502103 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e996783-8727-476d-aada-d8e341c8b40e-tigera-ca-bundle\") pod \"calico-typha-5778476d56-87vtk\" (UID: \"6e996783-8727-476d-aada-d8e341c8b40e\") " pod="calico-system/calico-typha-5778476d56-87vtk" Jan 13 20:41:15.502236 kubelet[1847]: I0113 20:41:15.502218 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6e996783-8727-476d-aada-d8e341c8b40e-typha-certs\") pod \"calico-typha-5778476d56-87vtk\" (UID: \"6e996783-8727-476d-aada-d8e341c8b40e\") " pod="calico-system/calico-typha-5778476d56-87vtk" Jan 13 20:41:15.633908 kubelet[1847]: I0113 20:41:15.633835 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mp9v9" podStartSLOduration=3.365407374 podStartE2EDuration="20.633807204s" podCreationTimestamp="2025-01-13 20:40:55 +0000 UTC" firstStartedPulling="2025-01-13 20:40:57.396920167 +0000 UTC m=+3.702643250" lastFinishedPulling="2025-01-13 20:41:14.665320002 +0000 UTC m=+20.971043080" observedRunningTime="2025-01-13 20:41:15.523492645 +0000 UTC m=+21.829215735" watchObservedRunningTime="2025-01-13 20:41:15.633807204 +0000 UTC m=+21.939530285" Jan 13 20:41:15.677834 systemd[1]: run-netns-cni\x2d2ee7fbf6\x2dd99c\x2d5532\x2dd0b3\x2d0bc3e926ceb6.mount: Deactivated successfully. Jan 13 20:41:15.678011 systemd[1]: run-netns-cni\x2d776d2718\x2d2568\x2d82ec\x2de4ad\x2d5371ab5e13ef.mount: Deactivated successfully. 
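The startup-latency line above is internally consistent and worth unpacking: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration excludes the image-pull window, which kubelet takes from the monotonic m=+ offsets (20.971043080 - 3.702643250 = 17.268399830s). Re-deriving both from the log's own timestamps:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matching the timestamps exactly as printed in the log.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-13 20:40:55 +0000 UTC")
	pullStart := mustParse("2025-01-13 20:40:57.396920167 +0000 UTC")
	pullEnd := mustParse("2025-01-13 20:41:14.665320002 +0000 UTC")
	running := mustParse("2025-01-13 20:41:15.633807204 +0000 UTC")

	e2e := running.Sub(created)         // 20.633807204s, as logged
	slo := e2e - pullEnd.Sub(pullStart) // 3.365407369s from wall clocks
	// The logged 3.365407374s uses the monotonic offsets instead; the two
	// derivations differ by only 5ns.
	fmt.Println(e2e, slo)
}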
Jan 13 20:41:15.759749 containerd[1481]: time="2025-01-13T20:41:15.759573228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5778476d56-87vtk,Uid:6e996783-8727-476d-aada-d8e341c8b40e,Namespace:calico-system,Attempt:0,}" Jan 13 20:41:15.796499 containerd[1481]: time="2025-01-13T20:41:15.796200322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:15.796499 containerd[1481]: time="2025-01-13T20:41:15.796283727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:15.796499 containerd[1481]: time="2025-01-13T20:41:15.796308893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:15.796904 containerd[1481]: time="2025-01-13T20:41:15.796429751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:15.842300 systemd[1]: Started cri-containerd-dedb8f9c866b5787b2c12ed1006a900d0ffe285c6da5689937affe865f512b15.scope - libcontainer container dedb8f9c866b5787b2c12ed1006a900d0ffe285c6da5689937affe865f512b15. Jan 13 20:41:15.901785 containerd[1481]: time="2025-01-13T20:41:15.901595302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5778476d56-87vtk,Uid:6e996783-8727-476d-aada-d8e341c8b40e,Namespace:calico-system,Attempt:0,} returns sandbox id \"dedb8f9c866b5787b2c12ed1006a900d0ffe285c6da5689937affe865f512b15\"" Jan 13 20:41:15.906295 containerd[1481]: time="2025-01-13T20:41:15.906207428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 20:41:15.914183 systemd-networkd[1409]: calie93962d5d24: Link UP Jan 13 20:41:15.915878 systemd-networkd[1409]: calie93962d5d24: Gained carrier Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.544 [INFO][2889] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.566 [INFO][2889] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.39-k8s-csi--node--driver--4fr6x-eth0 csi-node-driver- calico-system 6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e 1031 0 2025-01-13 20:40:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.128.0.39 csi-node-driver-4fr6x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie93962d5d24 [] []}} ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Namespace="calico-system" Pod="csi-node-driver-4fr6x" WorkloadEndpoint="10.128.0.39-k8s-csi--node--driver--4fr6x-" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.566 [INFO][2889] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Namespace="calico-system" Pod="csi-node-driver-4fr6x" WorkloadEndpoint="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.625 [INFO][2904] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" HandleID="k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Workload="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.743 [INFO][2904] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" HandleID="k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Workload="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.128.0.39", "pod":"csi-node-driver-4fr6x", "timestamp":"2025-01-13 20:41:15.624987815 +0000 UTC"}, Hostname:"10.128.0.39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.744 [INFO][2904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.744 [INFO][2904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.744 [INFO][2904] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.39' Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.751 [INFO][2904] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.844 [INFO][2904] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.853 [INFO][2904] ipam/ipam.go 489: Trying affinity for 192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.856 [INFO][2904] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.863 [INFO][2904] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.863 [INFO][2904] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.866 [INFO][2904] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7 Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.877 [INFO][2904] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.898 [INFO][2904] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.65/26] block=192.168.126.64/26 handle="k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.898 [INFO][2904] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.65/26] 
handle="k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" host="10.128.0.39" Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.898 [INFO][2904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:41:15.935153 containerd[1481]: 2025-01-13 20:41:15.898 [INFO][2904] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.65/26] IPv6=[] ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" HandleID="k8s-pod-network.e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Workload="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" Jan 13 20:41:15.936389 containerd[1481]: 2025-01-13 20:41:15.902 [INFO][2889] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Namespace="calico-system" Pod="csi-node-driver-4fr6x" WorkloadEndpoint="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-csi--node--driver--4fr6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"", Pod:"csi-node-driver-4fr6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie93962d5d24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:41:15.936389 containerd[1481]: 2025-01-13 20:41:15.902 [INFO][2889] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.65/32] ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Namespace="calico-system" Pod="csi-node-driver-4fr6x" WorkloadEndpoint="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" Jan 13 20:41:15.936389 containerd[1481]: 2025-01-13 20:41:15.903 [INFO][2889] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie93962d5d24 ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Namespace="calico-system" Pod="csi-node-driver-4fr6x" WorkloadEndpoint="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" Jan 13 20:41:15.936389 containerd[1481]: 2025-01-13 20:41:15.916 [INFO][2889] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Namespace="calico-system" Pod="csi-node-driver-4fr6x" WorkloadEndpoint="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" Jan 13 20:41:15.936389 containerd[1481]: 2025-01-13 20:41:15.916 [INFO][2889] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Namespace="calico-system" Pod="csi-node-driver-4fr6x" WorkloadEndpoint="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-csi--node--driver--4fr6x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 40, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7", Pod:"csi-node-driver-4fr6x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie93962d5d24", MAC:"f2:d0:6c:65:cb:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:41:15.936389 containerd[1481]: 2025-01-13 20:41:15.932 [INFO][2889] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7" Namespace="calico-system" Pod="csi-node-driver-4fr6x" WorkloadEndpoint="10.128.0.39-k8s-csi--node--driver--4fr6x-eth0" Jan 13 20:41:15.973711 containerd[1481]: time="2025-01-13T20:41:15.967598304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:15.973711 containerd[1481]: time="2025-01-13T20:41:15.967681201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:15.973711 containerd[1481]: time="2025-01-13T20:41:15.967709560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:15.973711 containerd[1481]: time="2025-01-13T20:41:15.967830622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:16.001511 systemd[1]: Started cri-containerd-e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7.scope - libcontainer container e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7. 
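The [2904] trace above is Calico's CNI IPAM path end to end: take the host-wide lock, confirm the node's block affinity for 192.168.126.64/26, claim 192.168.126.65 from that block, and record the claim under a handle named after the sandbox (k8s-pod-network.e226626...). The same AutoAssignArgs dumped in the log can be issued directly through libcalico-go. A minimal sketch, assuming the monorepo import paths of recent Calico releases and clientv3's environment-based constructor; the AutoAssign return type has changed across versions, so the result is printed generically:

    package main

    import (
        "context"
        "fmt"
        "log"

        client "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
        "github.com/projectcalico/calico/libcalico-go/lib/ipam"
    )

    func main() {
        // Datastore settings come from the environment (DATASTORE_TYPE etc.);
        // an assumption for this sketch.
        c, err := client.NewFromEnv()
        if err != nil {
            log.Fatal(err)
        }

        // Handle naming mirrors the log; <container-id> is a placeholder.
        handle := "k8s-pod-network.<container-id>"

        // Field names below are taken verbatim from the AutoAssignArgs dump above.
        v4, _, err := c.IPAM().AutoAssign(context.Background(), ipam.AutoAssignArgs{
            Num4:     1,
            HandleID: &handle,
            Attrs:    map[string]string{"namespace": "calico-system", "node": "10.128.0.39"},
            Hostname: "10.128.0.39",
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("assigned: %v\n", v4)
    }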
Jan 13 20:41:16.004475 systemd-networkd[1409]: cali456db4cc544: Link UP Jan 13 20:41:16.004822 systemd-networkd[1409]: cali456db4cc544: Gained carrier Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.533 [INFO][2878] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.565 [INFO][2878] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0 nginx-deployment-8587fbcb89- default 0c9d582e-358c-421f-9aca-e554a62d02ed 1105 0 2025-01-13 20:41:08 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.128.0.39 nginx-deployment-8587fbcb89-qvgsx eth0 default [] [] [kns.default ksa.default.default] cali456db4cc544 [] []}} ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Namespace="default" Pod="nginx-deployment-8587fbcb89-qvgsx" WorkloadEndpoint="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.565 [INFO][2878] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Namespace="default" Pod="nginx-deployment-8587fbcb89-qvgsx" WorkloadEndpoint="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.629 [INFO][2908] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" HandleID="k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Workload="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.843 [INFO][2908] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" HandleID="k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Workload="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907f0), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.39", "pod":"nginx-deployment-8587fbcb89-qvgsx", "timestamp":"2025-01-13 20:41:15.629549885 +0000 UTC"}, Hostname:"10.128.0.39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.844 [INFO][2908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.899 [INFO][2908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.899 [INFO][2908] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.39' Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.905 [INFO][2908] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.948 [INFO][2908] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.963 [INFO][2908] ipam/ipam.go 489: Trying affinity for 192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.967 [INFO][2908] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.970 [INFO][2908] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.970 [INFO][2908] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.974 [INFO][2908] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64 Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.982 [INFO][2908] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.992 [INFO][2908] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.66/26] block=192.168.126.64/26 handle="k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.992 [INFO][2908] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.66/26] handle="k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" host="10.128.0.39" Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.992 [INFO][2908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
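Note how the host-wide IPAM lock serializes the two concurrent CNI ADDs: [2904] (csi-node-driver) acquires it at 15.744 and releases it at 15.898, while [2908] (nginx), which asked at 15.844, only acquires it at 15.899, about 55 ms later, and then draws the next free address (192.168.126.66) from the same affine block. The block arithmetic is easy to check with the standard library, using the addresses logged above:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Node 10.128.0.39 holds an affinity for this /26.
        block := netip.MustParsePrefix("192.168.126.64/26")

        // Sequential assignments observed in the log.
        for _, s := range []string{"192.168.126.65", "192.168.126.66"} {
            ip := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %t\n", s, block, block.Contains(ip))
        }
        fmt.Println("block capacity:", 1<<(32-block.Bits())) // 64 addresses
    }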
Jan 13 20:41:16.024185 containerd[1481]: 2025-01-13 20:41:15.992 [INFO][2908] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.66/26] IPv6=[] ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" HandleID="k8s-pod-network.646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Workload="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" Jan 13 20:41:16.025371 containerd[1481]: 2025-01-13 20:41:15.999 [INFO][2878] cni-plugin/k8s.go 386: Populated endpoint ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Namespace="default" Pod="nginx-deployment-8587fbcb89-qvgsx" WorkloadEndpoint="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"0c9d582e-358c-421f-9aca-e554a62d02ed", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-qvgsx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali456db4cc544", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:41:16.025371 containerd[1481]: 2025-01-13 20:41:15.999 [INFO][2878] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.66/32] ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Namespace="default" Pod="nginx-deployment-8587fbcb89-qvgsx" WorkloadEndpoint="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" Jan 13 20:41:16.025371 containerd[1481]: 2025-01-13 20:41:15.999 [INFO][2878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali456db4cc544 ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Namespace="default" Pod="nginx-deployment-8587fbcb89-qvgsx" WorkloadEndpoint="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" Jan 13 20:41:16.025371 containerd[1481]: 2025-01-13 20:41:16.005 [INFO][2878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Namespace="default" Pod="nginx-deployment-8587fbcb89-qvgsx" WorkloadEndpoint="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" Jan 13 20:41:16.025371 containerd[1481]: 2025-01-13 20:41:16.007 [INFO][2878] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Namespace="default" Pod="nginx-deployment-8587fbcb89-qvgsx" WorkloadEndpoint="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"0c9d582e-358c-421f-9aca-e554a62d02ed", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64", Pod:"nginx-deployment-8587fbcb89-qvgsx", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali456db4cc544", MAC:"fe:51:92:9b:e4:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:41:16.025371 containerd[1481]: 2025-01-13 20:41:16.021 [INFO][2878] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64" Namespace="default" Pod="nginx-deployment-8587fbcb89-qvgsx" WorkloadEndpoint="10.128.0.39-k8s-nginx--deployment--8587fbcb89--qvgsx-eth0" Jan 13 20:41:16.058552 containerd[1481]: time="2025-01-13T20:41:16.058047107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4fr6x,Uid:6981a5c8-5fcb-4dab-9a47-f02b70fe4b4e,Namespace:calico-system,Attempt:9,} returns sandbox id \"e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7\"" Jan 13 20:41:16.069471 containerd[1481]: time="2025-01-13T20:41:16.068640842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:16.069471 containerd[1481]: time="2025-01-13T20:41:16.068715689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:16.069471 containerd[1481]: time="2025-01-13T20:41:16.068735070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:16.069471 containerd[1481]: time="2025-01-13T20:41:16.068855105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:16.096227 systemd[1]: Started cri-containerd-646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64.scope - libcontainer container 646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64. 
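While these sandboxes come up, kubelet has already asked the CRI to pull ghcr.io/flatcar/calico/typha:v3.29.1 (the PullImage line at 20:41:15.906 above); the pull completes further down at 20:41:18.739 ("in 2.833606976s"), roughly 10.5 MB/s given the 29,850,141 bytes read. The same pull can be reproduced against this containerd with its Go client. A minimal sketch; the socket path and the 1.x client import path are assumptions (the runc.v2 shim in this log is consistent with containerd 1.x):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace, matching the
        // namespace=k8s.io fields elsewhere in this log.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", img.Name())
    }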
Jan 13 20:41:16.149490 kubelet[1847]: E0113 20:41:16.148383 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:16.150443 containerd[1481]: time="2025-01-13T20:41:16.150325092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-qvgsx,Uid:0c9d582e-358c-421f-9aca-e554a62d02ed,Namespace:default,Attempt:7,} returns sandbox id \"646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64\"" Jan 13 20:41:16.495216 kubelet[1847]: I0113 20:41:16.494758 1847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:41:16.597939 systemd[1]: Created slice kubepods-besteffort-podc5a6298e_a0fa_4117_b8f7_b16abb5d8db3.slice - libcontainer container kubepods-besteffort-podc5a6298e_a0fa_4117_b8f7_b16abb5d8db3.slice. Jan 13 20:41:16.606402 containerd[1481]: time="2025-01-13T20:41:16.606356376Z" level=info msg="StopContainer for \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\" with timeout 5 (s)" Jan 13 20:41:16.607347 containerd[1481]: time="2025-01-13T20:41:16.606796087Z" level=info msg="Stop container \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\" with signal terminated" Jan 13 20:41:16.613810 kubelet[1847]: I0113 20:41:16.613075 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdt9z\" (UniqueName: \"kubernetes.io/projected/c5a6298e-a0fa-4117-b8f7-b16abb5d8db3-kube-api-access-xdt9z\") pod \"calico-kube-controllers-59c49674bf-9srrw\" (UID: \"c5a6298e-a0fa-4117-b8f7-b16abb5d8db3\") " pod="calico-system/calico-kube-controllers-59c49674bf-9srrw" Jan 13 20:41:16.613810 kubelet[1847]: I0113 20:41:16.613126 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5a6298e-a0fa-4117-b8f7-b16abb5d8db3-tigera-ca-bundle\") pod \"calico-kube-controllers-59c49674bf-9srrw\" (UID: \"c5a6298e-a0fa-4117-b8f7-b16abb5d8db3\") " pod="calico-system/calico-kube-controllers-59c49674bf-9srrw" Jan 13 20:41:16.624922 systemd[1]: cri-containerd-59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49.scope: Deactivated successfully. Jan 13 20:41:16.674474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49-rootfs.mount: Deactivated successfully. 
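The StopContainer sequence above is the CRI's graceful-stop contract in action: signal the task with SIGTERM ("with signal terminated"), wait up to the requested timeout (5 s here), and only escalate to SIGKILL if it does not exit; the cri-containerd scope deactivating cleanly shows the terminate was honored. A sketch of the same pattern with the containerd Go client, using the container ID and namespace from the log (socket path assumed, error handling trimmed to essentials):

    package main

    import (
        "context"
        "log"
        "syscall"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        container, err := client.LoadContainer(ctx,
            "59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49")
        if err != nil {
            log.Fatal(err)
        }
        task, err := container.Task(ctx, nil)
        if err != nil {
            log.Fatal(err)
        }
        exitCh, err := task.Wait(ctx)
        if err != nil {
            log.Fatal(err)
        }

        // Graceful first: SIGTERM, then escalate after the timeout.
        if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
            log.Fatal(err)
        }
        select {
        case status := <-exitCh:
            log.Printf("exited: %d", status.ExitCode())
        case <-time.After(5 * time.Second): // the "timeout 5 (s)" from the log
            _ = task.Kill(ctx, syscall.SIGKILL)
            <-exitCh
        }
    }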
Jan 13 20:41:16.902880 containerd[1481]: time="2025-01-13T20:41:16.902747235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59c49674bf-9srrw,Uid:c5a6298e-a0fa-4117-b8f7-b16abb5d8db3,Namespace:calico-system,Attempt:0,}" Jan 13 20:41:17.138312 systemd-networkd[1409]: calie93962d5d24: Gained IPv6LL Jan 13 20:41:17.150084 kubelet[1847]: E0113 20:41:17.150010 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:17.778277 systemd-networkd[1409]: cali456db4cc544: Gained IPv6LL Jan 13 20:41:18.152947 kubelet[1847]: E0113 20:41:18.151192 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:18.341832 containerd[1481]: time="2025-01-13T20:41:18.341499271Z" level=info msg="shim disconnected" id=59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49 namespace=k8s.io Jan 13 20:41:18.341832 containerd[1481]: time="2025-01-13T20:41:18.341579667Z" level=warning msg="cleaning up after shim disconnected" id=59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49 namespace=k8s.io Jan 13 20:41:18.341832 containerd[1481]: time="2025-01-13T20:41:18.341594489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:18.372277 containerd[1481]: time="2025-01-13T20:41:18.372081153Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:41:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:41:18.377044 containerd[1481]: time="2025-01-13T20:41:18.376999494Z" level=info msg="StopContainer for \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\" returns successfully" Jan 13 20:41:18.378759 containerd[1481]: time="2025-01-13T20:41:18.378365944Z" level=info msg="StopPodSandbox for \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\"" Jan 13 20:41:18.378759 containerd[1481]: time="2025-01-13T20:41:18.378411083Z" level=info msg="Container to stop \"06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:18.378759 containerd[1481]: time="2025-01-13T20:41:18.378461326Z" level=info msg="Container to stop \"f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:18.378759 containerd[1481]: time="2025-01-13T20:41:18.378478328Z" level=info msg="Container to stop \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:41:18.385455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6-shm.mount: Deactivated successfully. Jan 13 20:41:18.399492 systemd[1]: cri-containerd-a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6.scope: Deactivated successfully. Jan 13 20:41:18.458630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6-rootfs.mount: Deactivated successfully. 
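The StopPodSandbox / "TearDown network for sandbox" pairs in this log (for instance, the retired csi-node-driver sandboxes at the top of the excerpt) are containerd issuing a CNI DEL before the sandbox's network namespace is destroyed, which is what lets Calico remove the veth and release the IPAM handle. A sketch of that call path using the reference libcni package; the conflist name matches the "k8s-pod-network" handle prefixes in this log, while the CNI directories are the conventional defaults and the IDs are placeholders:

    package main

    import (
        "context"
        "log"

        "github.com/containernetworking/cni/libcni"
    )

    func main() {
        // Conventional CNI binary dir; an assumption for this sketch.
        cninet := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

        // Conflist name matches the k8s-pod-network handles above.
        conf, err := libcni.LoadConfList("/etc/cni/net.d", "k8s-pod-network")
        if err != nil {
            log.Fatal(err)
        }

        rt := &libcni.RuntimeConf{
            ContainerID: "<sandbox-id>",                  // placeholder
            NetNS:       "/var/run/netns/<sandbox-netns>", // placeholder; the runtime tracks the real path
            IfName:      "eth0",                           // matches Endpoint:"eth0" in the log
        }
        if err := cninet.DelNetworkList(context.Background(), conf, rt); err != nil {
            log.Fatal(err)
        }
        log.Println("network torn down")
    }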
Jan 13 20:41:18.549886 systemd-networkd[1409]: calib4fb178674e: Link UP Jan 13 20:41:18.552173 systemd-networkd[1409]: calib4fb178674e: Gained carrier Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.360 [INFO][3199] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.394 [INFO][3199] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0 calico-kube-controllers-59c49674bf- calico-system c5a6298e-a0fa-4117-b8f7-b16abb5d8db3 1244 0 2025-01-13 20:41:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59c49674bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 10.128.0.39 calico-kube-controllers-59c49674bf-9srrw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib4fb178674e [] []}} ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Namespace="calico-system" Pod="calico-kube-controllers-59c49674bf-9srrw" WorkloadEndpoint="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.394 [INFO][3199] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Namespace="calico-system" Pod="calico-kube-controllers-59c49674bf-9srrw" WorkloadEndpoint="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.481 [INFO][3236] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" HandleID="k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Workload="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.496 [INFO][3236] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" HandleID="k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Workload="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bccb0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.128.0.39", "pod":"calico-kube-controllers-59c49674bf-9srrw", "timestamp":"2025-01-13 20:41:18.481463557 +0000 UTC"}, Hostname:"10.128.0.39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.497 [INFO][3236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.497 [INFO][3236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.497 [INFO][3236] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.39' Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.500 [INFO][3236] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.505 [INFO][3236] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.513 [INFO][3236] ipam/ipam.go 489: Trying affinity for 192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.518 [INFO][3236] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.523 [INFO][3236] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.524 [INFO][3236] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.527 [INFO][3236] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986 Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.532 [INFO][3236] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.540 [INFO][3236] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.67/26] block=192.168.126.64/26 handle="k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.541 [INFO][3236] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.67/26] handle="k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" host="10.128.0.39" Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.541 [INFO][3236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:41:18.573381 containerd[1481]: 2025-01-13 20:41:18.541 [INFO][3236] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.67/26] IPv6=[] ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" HandleID="k8s-pod-network.d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Workload="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" Jan 13 20:41:18.574654 containerd[1481]: 2025-01-13 20:41:18.545 [INFO][3199] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Namespace="calico-system" Pod="calico-kube-controllers-59c49674bf-9srrw" WorkloadEndpoint="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0", GenerateName:"calico-kube-controllers-59c49674bf-", Namespace:"calico-system", SelfLink:"", UID:"c5a6298e-a0fa-4117-b8f7-b16abb5d8db3", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59c49674bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"", Pod:"calico-kube-controllers-59c49674bf-9srrw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4fb178674e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:41:18.574654 containerd[1481]: 2025-01-13 20:41:18.545 [INFO][3199] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.67/32] ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Namespace="calico-system" Pod="calico-kube-controllers-59c49674bf-9srrw" WorkloadEndpoint="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" Jan 13 20:41:18.574654 containerd[1481]: 2025-01-13 20:41:18.545 [INFO][3199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4fb178674e ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Namespace="calico-system" Pod="calico-kube-controllers-59c49674bf-9srrw" WorkloadEndpoint="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" Jan 13 20:41:18.574654 containerd[1481]: 2025-01-13 20:41:18.552 [INFO][3199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Namespace="calico-system" Pod="calico-kube-controllers-59c49674bf-9srrw" WorkloadEndpoint="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" Jan 13 20:41:18.574654 containerd[1481]: 2025-01-13 20:41:18.553 [INFO][3199] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Namespace="calico-system" Pod="calico-kube-controllers-59c49674bf-9srrw" WorkloadEndpoint="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0", GenerateName:"calico-kube-controllers-59c49674bf-", Namespace:"calico-system", SelfLink:"", UID:"c5a6298e-a0fa-4117-b8f7-b16abb5d8db3", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 41, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59c49674bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986", Pod:"calico-kube-controllers-59c49674bf-9srrw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib4fb178674e", MAC:"b6:12:dd:b0:21:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:41:18.574654 containerd[1481]: 2025-01-13 20:41:18.569 [INFO][3199] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986" Namespace="calico-system" Pod="calico-kube-controllers-59c49674bf-9srrw" WorkloadEndpoint="10.128.0.39-k8s-calico--kube--controllers--59c49674bf--9srrw-eth0" Jan 13 20:41:18.629423 containerd[1481]: time="2025-01-13T20:41:18.629101484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:18.629423 containerd[1481]: time="2025-01-13T20:41:18.629354331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:18.629423 containerd[1481]: time="2025-01-13T20:41:18.629393031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:18.633337 containerd[1481]: time="2025-01-13T20:41:18.633059827Z" level=info msg="shim disconnected" id=a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6 namespace=k8s.io Jan 13 20:41:18.633337 containerd[1481]: time="2025-01-13T20:41:18.633132135Z" level=warning msg="cleaning up after shim disconnected" id=a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6 namespace=k8s.io Jan 13 20:41:18.633337 containerd[1481]: time="2025-01-13T20:41:18.633145825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:41:18.634209 containerd[1481]: time="2025-01-13T20:41:18.633269725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:18.679034 containerd[1481]: time="2025-01-13T20:41:18.678193294Z" level=info msg="TearDown network for sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" successfully" Jan 13 20:41:18.679034 containerd[1481]: time="2025-01-13T20:41:18.678260854Z" level=info msg="StopPodSandbox for \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" returns successfully" Jan 13 20:41:18.685327 systemd[1]: Started cri-containerd-d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986.scope - libcontainer container d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986. Jan 13 20:41:18.725639 containerd[1481]: time="2025-01-13T20:41:18.725581194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:18.727281 kubelet[1847]: I0113 20:41:18.727243 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-lib-modules\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727457 kubelet[1847]: I0113 20:41:18.727306 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-log-dir\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727457 kubelet[1847]: I0113 20:41:18.727348 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c249fcc-0413-4594-ad53-355fd7dd0193-node-certs\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727457 kubelet[1847]: I0113 20:41:18.727374 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-flexvol-driver-host\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727457 kubelet[1847]: I0113 20:41:18.727424 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c249fcc-0413-4594-ad53-355fd7dd0193-tigera-ca-bundle\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727457 kubelet[1847]: I0113 20:41:18.727450 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-bin-dir\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727713 kubelet[1847]: I0113 20:41:18.727483 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chhdz\" (UniqueName: \"kubernetes.io/projected/9c249fcc-0413-4594-ad53-355fd7dd0193-kube-api-access-chhdz\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727713 kubelet[1847]: I0113 20:41:18.727513 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-var-lib-calico\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727713 kubelet[1847]: I0113 20:41:18.727538 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-net-dir\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727713 kubelet[1847]: I0113 20:41:18.727565 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-var-run-calico\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727713 kubelet[1847]: I0113 20:41:18.727592 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-policysync\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.727713 kubelet[1847]: I0113 20:41:18.727621 1847 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-xtables-lock\") pod \"9c249fcc-0413-4594-ad53-355fd7dd0193\" (UID: \"9c249fcc-0413-4594-ad53-355fd7dd0193\") " Jan 13 20:41:18.728032 kubelet[1847]: I0113 20:41:18.727717 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.728032 kubelet[1847]: I0113 20:41:18.727769 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.728032 kubelet[1847]: I0113 20:41:18.727796 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.728192 containerd[1481]: time="2025-01-13T20:41:18.728052820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 13 20:41:18.729981 kubelet[1847]: I0113 20:41:18.728687 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.729981 kubelet[1847]: I0113 20:41:18.729089 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.729981 kubelet[1847]: I0113 20:41:18.729149 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.729981 kubelet[1847]: I0113 20:41:18.729184 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-policysync" (OuterVolumeSpecName: "policysync") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.732985 kubelet[1847]: I0113 20:41:18.730279 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.732985 kubelet[1847]: I0113 20:41:18.730279 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:41:18.733151 containerd[1481]: time="2025-01-13T20:41:18.730570666Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:18.741018 containerd[1481]: time="2025-01-13T20:41:18.739866326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.833606976s" Jan 13 20:41:18.741018 containerd[1481]: time="2025-01-13T20:41:18.739917162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 13 20:41:18.741199 containerd[1481]: time="2025-01-13T20:41:18.741053377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:41:18.744717 kubelet[1847]: E0113 20:41:18.744650 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c249fcc-0413-4594-ad53-355fd7dd0193" containerName="flexvol-driver" Jan 13 20:41:18.744717 kubelet[1847]: E0113 20:41:18.744692 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c249fcc-0413-4594-ad53-355fd7dd0193" containerName="install-cni" Jan 13 20:41:18.744717 kubelet[1847]: E0113 20:41:18.744703 1847 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9c249fcc-0413-4594-ad53-355fd7dd0193" containerName="calico-node" Jan 13 20:41:18.744916 kubelet[1847]: I0113 20:41:18.744733 1847 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c249fcc-0413-4594-ad53-355fd7dd0193" containerName="calico-node" Jan 13 20:41:18.752932 kubelet[1847]: I0113 20:41:18.748810 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c249fcc-0413-4594-ad53-355fd7dd0193-kube-api-access-chhdz" (OuterVolumeSpecName: "kube-api-access-chhdz") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "kube-api-access-chhdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:41:18.753918 kubelet[1847]: I0113 20:41:18.753758 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c249fcc-0413-4594-ad53-355fd7dd0193-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:41:18.754103 containerd[1481]: time="2025-01-13T20:41:18.754045589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:41:18.755611 kubelet[1847]: I0113 20:41:18.755577 1847 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c249fcc-0413-4594-ad53-355fd7dd0193-node-certs" (OuterVolumeSpecName: "node-certs") pod "9c249fcc-0413-4594-ad53-355fd7dd0193" (UID: "9c249fcc-0413-4594-ad53-355fd7dd0193"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:41:18.766605 systemd[1]: Created slice kubepods-besteffort-pode28220a8_0495_47a5_b33b_4ae579662337.slice - libcontainer container kubepods-besteffort-pode28220a8_0495_47a5_b33b_4ae579662337.slice. Jan 13 20:41:18.774562 containerd[1481]: time="2025-01-13T20:41:18.774525356Z" level=info msg="CreateContainer within sandbox \"dedb8f9c866b5787b2c12ed1006a900d0ffe285c6da5689937affe865f512b15\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 20:41:18.786271 containerd[1481]: time="2025-01-13T20:41:18.786204694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59c49674bf-9srrw,Uid:c5a6298e-a0fa-4117-b8f7-b16abb5d8db3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986\"" Jan 13 20:41:18.795776 containerd[1481]: time="2025-01-13T20:41:18.795729826Z" level=info msg="CreateContainer within sandbox \"dedb8f9c866b5787b2c12ed1006a900d0ffe285c6da5689937affe865f512b15\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"abcba9df00e4b931fa2673ffb285b8cd2252100284b6912d994273bb3bba4b20\"" Jan 13 20:41:18.796341 containerd[1481]: time="2025-01-13T20:41:18.796309381Z" level=info msg="StartContainer for \"abcba9df00e4b931fa2673ffb285b8cd2252100284b6912d994273bb3bba4b20\"" Jan 13 20:41:18.828094 kubelet[1847]: I0113 20:41:18.828057 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-lib-modules\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.828384 kubelet[1847]: I0113 20:41:18.828358 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-xtables-lock\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829183 kubelet[1847]: I0113 20:41:18.828517 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-policysync\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829183 kubelet[1847]: I0113 20:41:18.828561 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-cni-net-dir\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829183 kubelet[1847]: I0113 20:41:18.828593 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e28220a8-0495-47a5-b33b-4ae579662337-node-certs\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829183 kubelet[1847]: I0113 20:41:18.828623 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e28220a8-0495-47a5-b33b-4ae579662337-tigera-ca-bundle\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829183 kubelet[1847]: I0113 20:41:18.828655 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-cni-bin-dir\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829514 kubelet[1847]: I0113 20:41:18.828684 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-var-run-calico\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829514 kubelet[1847]: I0113 20:41:18.828711 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-var-lib-calico\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829514 kubelet[1847]: I0113 20:41:18.828745 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fj9\" (UniqueName: \"kubernetes.io/projected/e28220a8-0495-47a5-b33b-4ae579662337-kube-api-access-t7fj9\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829514 kubelet[1847]: I0113 20:41:18.828778 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-cni-log-dir\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829514 kubelet[1847]: I0113 20:41:18.828811 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e28220a8-0495-47a5-b33b-4ae579662337-flexvol-driver-host\") pod \"calico-node-nssfz\" (UID: \"e28220a8-0495-47a5-b33b-4ae579662337\") " pod="calico-system/calico-node-nssfz" Jan 13 20:41:18.829514 kubelet[1847]: I0113 20:41:18.828849 1847 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-log-dir\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.829842 kubelet[1847]: I0113 20:41:18.828866 1847 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c249fcc-0413-4594-ad53-355fd7dd0193-node-certs\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.829842 kubelet[1847]: I0113 20:41:18.828885 1847 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c249fcc-0413-4594-ad53-355fd7dd0193-tigera-ca-bundle\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.829842 kubelet[1847]: I0113 20:41:18.828902 1847 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-bin-dir\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.829842 kubelet[1847]: I0113 20:41:18.828917 1847 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-chhdz\" (UniqueName: \"kubernetes.io/projected/9c249fcc-0413-4594-ad53-355fd7dd0193-kube-api-access-chhdz\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.829842 kubelet[1847]: I0113 20:41:18.828933 1847 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-flexvol-driver-host\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.829842 kubelet[1847]: I0113 20:41:18.828976 1847 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-var-lib-calico\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.829842 kubelet[1847]: I0113 20:41:18.828993 1847 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-var-run-calico\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.829842 kubelet[1847]: I0113 20:41:18.829006 1847 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-policysync\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.830245 kubelet[1847]: I0113 20:41:18.829020 1847 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-xtables-lock\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.830245 kubelet[1847]: I0113 20:41:18.829035 1847 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-cni-net-dir\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.830245 kubelet[1847]: I0113 20:41:18.829051 1847 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c249fcc-0413-4594-ad53-355fd7dd0193-lib-modules\") on node \"10.128.0.39\" DevicePath \"\"" Jan 13 20:41:18.834205 systemd[1]: Started cri-containerd-abcba9df00e4b931fa2673ffb285b8cd2252100284b6912d994273bb3bba4b20.scope - libcontainer container abcba9df00e4b931fa2673ffb285b8cd2252100284b6912d994273bb3bba4b20. Jan 13 20:41:18.888256 containerd[1481]: time="2025-01-13T20:41:18.888206831Z" level=info msg="StartContainer for \"abcba9df00e4b931fa2673ffb285b8cd2252100284b6912d994273bb3bba4b20\" returns successfully" Jan 13 20:41:19.078882 containerd[1481]: time="2025-01-13T20:41:19.078803148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nssfz,Uid:e28220a8-0495-47a5-b33b-4ae579662337,Namespace:calico-system,Attempt:0,}" Jan 13 20:41:19.110936 containerd[1481]: time="2025-01-13T20:41:19.110477688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:19.110936 containerd[1481]: time="2025-01-13T20:41:19.110622537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:19.110936 containerd[1481]: time="2025-01-13T20:41:19.110654360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:19.114262 containerd[1481]: time="2025-01-13T20:41:19.110790984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:19.139231 systemd[1]: Started cri-containerd-2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f.scope - libcontainer container 2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f. Jan 13 20:41:19.153007 kubelet[1847]: E0113 20:41:19.152928 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:19.173890 containerd[1481]: time="2025-01-13T20:41:19.173828695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nssfz,Uid:e28220a8-0495-47a5-b33b-4ae579662337,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f\"" Jan 13 20:41:19.177586 containerd[1481]: time="2025-01-13T20:41:19.177520530Z" level=info msg="CreateContainer within sandbox \"2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:41:19.201562 containerd[1481]: time="2025-01-13T20:41:19.201394807Z" level=info msg="CreateContainer within sandbox \"2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0\"" Jan 13 20:41:19.202201 containerd[1481]: time="2025-01-13T20:41:19.202137464Z" level=info msg="StartContainer for \"ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0\"" Jan 13 20:41:19.241199 systemd[1]: Started cri-containerd-ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0.scope - libcontainer container ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0. Jan 13 20:41:19.279640 systemd[1]: var-lib-kubelet-pods-9c249fcc\x2d0413\x2d4594\x2dad53\x2d355fd7dd0193-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 13 20:41:19.279823 systemd[1]: var-lib-kubelet-pods-9c249fcc\x2d0413\x2d4594\x2dad53\x2d355fd7dd0193-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dchhdz.mount: Deactivated successfully. Jan 13 20:41:19.279937 systemd[1]: var-lib-kubelet-pods-9c249fcc\x2d0413\x2d4594\x2dad53\x2d355fd7dd0193-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 13 20:41:19.299940 systemd[1]: Removed slice kubepods-besteffort-pod9c249fcc_0413_4594_ad53_355fd7dd0193.slice - libcontainer container kubepods-besteffort-pod9c249fcc_0413_4594_ad53_355fd7dd0193.slice. Jan 13 20:41:19.300149 systemd[1]: kubepods-besteffort-pod9c249fcc_0413_4594_ad53_355fd7dd0193.slice: Consumed 1.420s CPU time. Jan 13 20:41:19.312856 containerd[1481]: time="2025-01-13T20:41:19.312799953Z" level=info msg="StartContainer for \"ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0\" returns successfully" Jan 13 20:41:19.329342 systemd[1]: cri-containerd-ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0.scope: Deactivated successfully. Jan 13 20:41:19.365935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0-rootfs.mount: Deactivated successfully. 
Jan 13 20:41:19.381555 containerd[1481]: time="2025-01-13T20:41:19.381486177Z" level=info msg="shim disconnected" id=ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0 namespace=k8s.io
Jan 13 20:41:19.382131 containerd[1481]: time="2025-01-13T20:41:19.381850083Z" level=warning msg="cleaning up after shim disconnected" id=ed82cb7f4817154c0b9e7aef618b4ef9cd63353113289b8e74f76b29492a60f0 namespace=k8s.io
Jan 13 20:41:19.382131 containerd[1481]: time="2025-01-13T20:41:19.381888957Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:19.530236 containerd[1481]: time="2025-01-13T20:41:19.528246125Z" level=info msg="CreateContainer within sandbox \"2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:41:19.559002 kubelet[1847]: I0113 20:41:19.555869 1847 scope.go:117] "RemoveContainer" containerID="59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49"
Jan 13 20:41:19.568349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242355589.mount: Deactivated successfully.
Jan 13 20:41:19.572302 kubelet[1847]: I0113 20:41:19.572137 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5778476d56-87vtk" podStartSLOduration=1.7301131079999998 podStartE2EDuration="4.57203419s" podCreationTimestamp="2025-01-13 20:41:15 +0000 UTC" firstStartedPulling="2025-01-13 20:41:15.904201819 +0000 UTC m=+22.209924901" lastFinishedPulling="2025-01-13 20:41:18.746122906 +0000 UTC m=+25.051845983" observedRunningTime="2025-01-13 20:41:19.544895759 +0000 UTC m=+25.850618912" watchObservedRunningTime="2025-01-13 20:41:19.57203419 +0000 UTC m=+25.877757293"
Jan 13 20:41:19.577729 containerd[1481]: time="2025-01-13T20:41:19.577627929Z" level=info msg="CreateContainer within sandbox \"2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b\""
Jan 13 20:41:19.578610 containerd[1481]: time="2025-01-13T20:41:19.578573716Z" level=info msg="StartContainer for \"a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b\""
Jan 13 20:41:19.585884 containerd[1481]: time="2025-01-13T20:41:19.585204596Z" level=info msg="RemoveContainer for \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\""
Jan 13 20:41:19.591996 containerd[1481]: time="2025-01-13T20:41:19.591891063Z" level=info msg="RemoveContainer for \"59e4cb402b97d0ada18c7e595f7a76cf90e57fabf5b177e96e306c7346e2ea49\" returns successfully"
Jan 13 20:41:19.593429 kubelet[1847]: I0113 20:41:19.593398 1847 scope.go:117] "RemoveContainer" containerID="f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463"
Jan 13 20:41:19.597048 containerd[1481]: time="2025-01-13T20:41:19.596969360Z" level=info msg="RemoveContainer for \"f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463\""
Jan 13 20:41:19.601709 containerd[1481]: time="2025-01-13T20:41:19.601672941Z" level=info msg="RemoveContainer for \"f35bf54f06ff42699ab9a242ea14270fdb34748d367bdf352d617495c21ff463\" returns successfully"
Jan 13 20:41:19.603157 kubelet[1847]: I0113 20:41:19.603056 1847 scope.go:117] "RemoveContainer" containerID="06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4"
Jan 13 20:41:19.606057 containerd[1481]: time="2025-01-13T20:41:19.605925897Z" level=info msg="RemoveContainer for \"06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4\""
Jan 13 20:41:19.638388 containerd[1481]: time="2025-01-13T20:41:19.638288086Z" level=info msg="RemoveContainer for \"06ed49f6a8e4cbaa0d82da91c18b577ee04bcbf207b767bcfd516e8ad11015d4\" returns successfully"
Jan 13 20:41:19.639181 systemd[1]: Started cri-containerd-a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b.scope - libcontainer container a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b.
Jan 13 20:41:19.711001 containerd[1481]: time="2025-01-13T20:41:19.710783792Z" level=info msg="StartContainer for \"a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b\" returns successfully"
Jan 13 20:41:19.948266 containerd[1481]: time="2025-01-13T20:41:19.948122754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:19.951349 containerd[1481]: time="2025-01-13T20:41:19.951234605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Jan 13 20:41:19.953148 containerd[1481]: time="2025-01-13T20:41:19.953101876Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:19.959589 containerd[1481]: time="2025-01-13T20:41:19.959200846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:19.961038 containerd[1481]: time="2025-01-13T20:41:19.961000662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.20691017s"
Jan 13 20:41:19.961193 containerd[1481]: time="2025-01-13T20:41:19.961169915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Jan 13 20:41:19.963782 containerd[1481]: time="2025-01-13T20:41:19.963478840Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:41:19.965457 containerd[1481]: time="2025-01-13T20:41:19.965006371Z" level=info msg="CreateContainer within sandbox \"e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 13 20:41:19.992847 containerd[1481]: time="2025-01-13T20:41:19.992785802Z" level=info msg="CreateContainer within sandbox \"e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"579b5042af82833c0b589c03ce5c160df1c15aa6215a2f04d72ae5bfdc841e4d\""
Jan 13 20:41:19.994124 containerd[1481]: time="2025-01-13T20:41:19.994087005Z" level=info msg="StartContainer for \"579b5042af82833c0b589c03ce5c160df1c15aa6215a2f04d72ae5bfdc841e4d\""
Jan 13 20:41:20.046017 systemd[1]: Started cri-containerd-579b5042af82833c0b589c03ce5c160df1c15aa6215a2f04d72ae5bfdc841e4d.scope - libcontainer container 579b5042af82833c0b589c03ce5c160df1c15aa6215a2f04d72ae5bfdc841e4d.
Jan 13 20:41:20.107150 containerd[1481]: time="2025-01-13T20:41:20.107010635Z" level=info msg="StartContainer for \"579b5042af82833c0b589c03ce5c160df1c15aa6215a2f04d72ae5bfdc841e4d\" returns successfully"
Jan 13 20:41:20.155024 kubelet[1847]: E0113 20:41:20.153975 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:20.375388 systemd[1]: cri-containerd-a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b.scope: Deactivated successfully.
Jan 13 20:41:20.409943 containerd[1481]: time="2025-01-13T20:41:20.408474895Z" level=info msg="shim disconnected" id=a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b namespace=k8s.io
Jan 13 20:41:20.409943 containerd[1481]: time="2025-01-13T20:41:20.408549014Z" level=warning msg="cleaning up after shim disconnected" id=a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b namespace=k8s.io
Jan 13 20:41:20.409943 containerd[1481]: time="2025-01-13T20:41:20.408565258Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:41:20.411122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a055c7e83871d690da833559bdef796e5821a34a0e8a0d6e557680f8e974b36b-rootfs.mount: Deactivated successfully.
Jan 13 20:41:20.530273 systemd-networkd[1409]: calib4fb178674e: Gained IPv6LL
Jan 13 20:41:20.568547 kubelet[1847]: I0113 20:41:20.568498 1847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:41:20.590000 containerd[1481]: time="2025-01-13T20:41:20.588658935Z" level=info msg="CreateContainer within sandbox \"2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 13 20:41:20.616406 containerd[1481]: time="2025-01-13T20:41:20.616351251Z" level=info msg="CreateContainer within sandbox \"2b9a59eb857891de57b4dd27061f28fda18689c7b66619dcaddbd4ecb18a961f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1c0791a84b043010b092b5b4d49c55716f88543c0262b7acfcc32f59782c7c62\""
Jan 13 20:41:20.617576 containerd[1481]: time="2025-01-13T20:41:20.617542143Z" level=info msg="StartContainer for \"1c0791a84b043010b092b5b4d49c55716f88543c0262b7acfcc32f59782c7c62\""
Jan 13 20:41:20.664180 systemd[1]: Started cri-containerd-1c0791a84b043010b092b5b4d49c55716f88543c0262b7acfcc32f59782c7c62.scope - libcontainer container 1c0791a84b043010b092b5b4d49c55716f88543c0262b7acfcc32f59782c7c62.
Jan 13 20:41:20.717220 containerd[1481]: time="2025-01-13T20:41:20.717165608Z" level=info msg="StartContainer for \"1c0791a84b043010b092b5b4d49c55716f88543c0262b7acfcc32f59782c7c62\" returns successfully"
Jan 13 20:41:21.154337 kubelet[1847]: E0113 20:41:21.154292 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:21.288213 kubelet[1847]: I0113 20:41:21.288172 1847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c249fcc-0413-4594-ad53-355fd7dd0193" path="/var/lib/kubelet/pods/9c249fcc-0413-4594-ad53-355fd7dd0193/volumes"
Jan 13 20:41:22.156006 kubelet[1847]: E0113 20:41:22.154827 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:22.702861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422438903.mount: Deactivated successfully.
Jan 13 20:41:22.944967 kubelet[1847]: I0113 20:41:22.944905 1847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:41:23.155824 kubelet[1847]: E0113 20:41:23.155534 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:23.474271 ntpd[1452]: Listen normally on 8 calie93962d5d24 [fe80::ecee:eeff:feee:eeee%3]:123
Jan 13 20:41:23.474837 ntpd[1452]: 13 Jan 20:41:23 ntpd[1452]: Listen normally on 8 calie93962d5d24 [fe80::ecee:eeff:feee:eeee%3]:123
Jan 13 20:41:23.474837 ntpd[1452]: 13 Jan 20:41:23 ntpd[1452]: Listen normally on 9 cali456db4cc544 [fe80::ecee:eeff:feee:eeee%4]:123
Jan 13 20:41:23.474837 ntpd[1452]: 13 Jan 20:41:23 ntpd[1452]: Listen normally on 10 calib4fb178674e [fe80::ecee:eeff:feee:eeee%5]:123
Jan 13 20:41:23.474600 ntpd[1452]: Listen normally on 9 cali456db4cc544 [fe80::ecee:eeff:feee:eeee%4]:123
Jan 13 20:41:23.474678 ntpd[1452]: Listen normally on 10 calib4fb178674e [fe80::ecee:eeff:feee:eeee%5]:123
Jan 13 20:41:24.156655 kubelet[1847]: E0113 20:41:24.156557 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:24.426489 containerd[1481]: time="2025-01-13T20:41:24.426028446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:24.427721 containerd[1481]: time="2025-01-13T20:41:24.427660141Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036018"
Jan 13 20:41:24.429055 containerd[1481]: time="2025-01-13T20:41:24.428988055Z" level=info msg="ImageCreate event name:\"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:24.434508 containerd[1481]: time="2025-01-13T20:41:24.434441293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:24.436331 containerd[1481]: time="2025-01-13T20:41:24.436151895Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 4.472631128s"
Jan 13 20:41:24.436331 containerd[1481]: time="2025-01-13T20:41:24.436198727Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 20:41:24.438133 containerd[1481]: time="2025-01-13T20:41:24.438035905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Jan 13 20:41:24.439836 containerd[1481]: time="2025-01-13T20:41:24.439799167Z" level=info msg="CreateContainer within sandbox \"646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 20:41:24.459122 containerd[1481]: time="2025-01-13T20:41:24.459073898Z" level=info msg="CreateContainer within sandbox \"646750e39a304e22668beebd0400139fc9f18c63887875b801313c5939b84d64\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"bcc979a6606d4641240e0a4b9bbd2e0200732600ad0bc5e4c4f818483d94adc0\""
Jan 13 20:41:24.459787 containerd[1481]: time="2025-01-13T20:41:24.459748230Z" level=info msg="StartContainer for \"bcc979a6606d4641240e0a4b9bbd2e0200732600ad0bc5e4c4f818483d94adc0\""
Jan 13 20:41:24.508205 systemd[1]: Started cri-containerd-bcc979a6606d4641240e0a4b9bbd2e0200732600ad0bc5e4c4f818483d94adc0.scope - libcontainer container bcc979a6606d4641240e0a4b9bbd2e0200732600ad0bc5e4c4f818483d94adc0.
Jan 13 20:41:24.542655 containerd[1481]: time="2025-01-13T20:41:24.542563505Z" level=info msg="StartContainer for \"bcc979a6606d4641240e0a4b9bbd2e0200732600ad0bc5e4c4f818483d94adc0\" returns successfully"
Jan 13 20:41:24.602644 kubelet[1847]: I0113 20:41:24.602557 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-qvgsx" podStartSLOduration=8.316960129 podStartE2EDuration="16.602536316s" podCreationTimestamp="2025-01-13 20:41:08 +0000 UTC" firstStartedPulling="2025-01-13 20:41:16.152185873 +0000 UTC m=+22.457908953" lastFinishedPulling="2025-01-13 20:41:24.437762058 +0000 UTC m=+30.743485140" observedRunningTime="2025-01-13 20:41:24.6024099 +0000 UTC m=+30.908132991" watchObservedRunningTime="2025-01-13 20:41:24.602536316 +0000 UTC m=+30.908259407"
Jan 13 20:41:24.603133 kubelet[1847]: I0113 20:41:24.602746 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nssfz" podStartSLOduration=6.602735774 podStartE2EDuration="6.602735774s" podCreationTimestamp="2025-01-13 20:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:41:21.592493157 +0000 UTC m=+27.898216249" watchObservedRunningTime="2025-01-13 20:41:24.602735774 +0000 UTC m=+30.908458910"
Jan 13 20:41:25.143513 update_engine[1473]: I20250113 20:41:25.143014 1473 update_attempter.cc:509] Updating boot flags...
Jan 13 20:41:25.161003 kubelet[1847]: E0113 20:41:25.158694 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:25.243989 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (3891)
Jan 13 20:41:25.441032 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 44 scanned by (udev-worker) (3886)
Jan 13 20:41:26.159495 kubelet[1847]: E0113 20:41:26.159437 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:26.468413 containerd[1481]: time="2025-01-13T20:41:26.467925487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:26.469675 containerd[1481]: time="2025-01-13T20:41:26.469600504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Jan 13 20:41:26.471286 containerd[1481]: time="2025-01-13T20:41:26.471213661Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:26.474008 containerd[1481]: time="2025-01-13T20:41:26.473917447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:26.475053 containerd[1481]: time="2025-01-13T20:41:26.475010943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.036929781s"
Jan 13 20:41:26.475521 containerd[1481]: time="2025-01-13T20:41:26.475059450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Jan 13 20:41:26.476483 containerd[1481]: time="2025-01-13T20:41:26.476449161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 13 20:41:26.498617 containerd[1481]: time="2025-01-13T20:41:26.498350582Z" level=info msg="CreateContainer within sandbox \"d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 13 20:41:26.518032 containerd[1481]: time="2025-01-13T20:41:26.517924526Z" level=info msg="CreateContainer within sandbox \"d7c92da8fc58e8660ce832bc95e8e8f057a3608eef4014de66bd463871094986\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2770ef50acfbafdd83e6742242b10d592d78e00e884868f7efe81e9859626119\""
Jan 13 20:41:26.518824 containerd[1481]: time="2025-01-13T20:41:26.518627976Z" level=info msg="StartContainer for \"2770ef50acfbafdd83e6742242b10d592d78e00e884868f7efe81e9859626119\""
Jan 13 20:41:26.565257 systemd[1]: Started cri-containerd-2770ef50acfbafdd83e6742242b10d592d78e00e884868f7efe81e9859626119.scope - libcontainer container 2770ef50acfbafdd83e6742242b10d592d78e00e884868f7efe81e9859626119.
Jan 13 20:41:26.630586 containerd[1481]: time="2025-01-13T20:41:26.630390604Z" level=info msg="StartContainer for \"2770ef50acfbafdd83e6742242b10d592d78e00e884868f7efe81e9859626119\" returns successfully"
Jan 13 20:41:27.160133 kubelet[1847]: E0113 20:41:27.160063 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:27.606715 containerd[1481]: time="2025-01-13T20:41:27.606615629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:27.608160 containerd[1481]: time="2025-01-13T20:41:27.608084599Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Jan 13 20:41:27.609795 containerd[1481]: time="2025-01-13T20:41:27.609695906Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:27.613101 containerd[1481]: time="2025-01-13T20:41:27.613060376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:27.614678 containerd[1481]: time="2025-01-13T20:41:27.613944248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.137448174s"
Jan 13 20:41:27.614678 containerd[1481]: time="2025-01-13T20:41:27.614011604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Jan 13 20:41:27.616448 containerd[1481]: time="2025-01-13T20:41:27.616395247Z" level=info msg="CreateContainer within sandbox \"e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 20:41:27.619854 kubelet[1847]: I0113 20:41:27.619418 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59c49674bf-9srrw" podStartSLOduration=3.93162761 podStartE2EDuration="11.619353405s" podCreationTimestamp="2025-01-13 20:41:16 +0000 UTC" firstStartedPulling="2025-01-13 20:41:18.788453024 +0000 UTC m=+25.094176105" lastFinishedPulling="2025-01-13 20:41:26.476178814 +0000 UTC m=+32.781901900" observedRunningTime="2025-01-13 20:41:27.617920523 +0000 UTC m=+33.923643613" watchObservedRunningTime="2025-01-13 20:41:27.619353405 +0000 UTC m=+33.925076495"
Jan 13 20:41:27.638223 containerd[1481]: time="2025-01-13T20:41:27.638170084Z" level=info msg="CreateContainer within sandbox \"e226626af2cc467a4c617a56d04bc55853f48b353ab40482bd98a671d3e44ea7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3765829f892f6ddef8a29f2dae1e72d8d16a82cec8c04f2a6f18558df2e89cfe\""
Jan 13 20:41:27.638910 containerd[1481]: time="2025-01-13T20:41:27.638838198Z" level=info msg="StartContainer for \"3765829f892f6ddef8a29f2dae1e72d8d16a82cec8c04f2a6f18558df2e89cfe\""
Jan 13 20:41:27.686009 systemd[1]: run-containerd-runc-k8s.io-3765829f892f6ddef8a29f2dae1e72d8d16a82cec8c04f2a6f18558df2e89cfe-runc.trjUWE.mount: Deactivated successfully.
Jan 13 20:41:27.698219 systemd[1]: Started cri-containerd-3765829f892f6ddef8a29f2dae1e72d8d16a82cec8c04f2a6f18558df2e89cfe.scope - libcontainer container 3765829f892f6ddef8a29f2dae1e72d8d16a82cec8c04f2a6f18558df2e89cfe.
Jan 13 20:41:27.741939 containerd[1481]: time="2025-01-13T20:41:27.741875512Z" level=info msg="StartContainer for \"3765829f892f6ddef8a29f2dae1e72d8d16a82cec8c04f2a6f18558df2e89cfe\" returns successfully"
Jan 13 20:41:28.161266 kubelet[1847]: E0113 20:41:28.161215 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:28.299654 kubelet[1847]: I0113 20:41:28.299618 1847 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 20:41:28.299654 kubelet[1847]: I0113 20:41:28.299660 1847 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 13 20:41:28.612347 kubelet[1847]: I0113 20:41:28.611550 1847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:41:29.162346 kubelet[1847]: E0113 20:41:29.162278 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:29.944405 kubelet[1847]: I0113 20:41:29.944322 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4fr6x" podStartSLOduration=23.390474736 podStartE2EDuration="34.944293329s" podCreationTimestamp="2025-01-13 20:40:55 +0000 UTC" firstStartedPulling="2025-01-13 20:41:16.061050497 +0000 UTC m=+22.366773572" lastFinishedPulling="2025-01-13 20:41:27.614869082 +0000 UTC m=+33.920592165" observedRunningTime="2025-01-13 20:41:28.652439255 +0000 UTC m=+34.958162346" watchObservedRunningTime="2025-01-13 20:41:29.944293329 +0000 UTC m=+36.250016409"
Jan 13 20:41:29.952264 systemd[1]: Created slice kubepods-besteffort-pod13fa570f_f8d4_426c_afc2_93ba4508d831.slice - libcontainer container kubepods-besteffort-pod13fa570f_f8d4_426c_afc2_93ba4508d831.slice.
Jan 13 20:41:30.105463 kubelet[1847]: I0113 20:41:30.105386 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5wqg\" (UniqueName: \"kubernetes.io/projected/13fa570f-f8d4-426c-afc2-93ba4508d831-kube-api-access-s5wqg\") pod \"nfs-server-provisioner-0\" (UID: \"13fa570f-f8d4-426c-afc2-93ba4508d831\") " pod="default/nfs-server-provisioner-0" Jan 13 20:41:30.105463 kubelet[1847]: I0113 20:41:30.105452 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/13fa570f-f8d4-426c-afc2-93ba4508d831-data\") pod \"nfs-server-provisioner-0\" (UID: \"13fa570f-f8d4-426c-afc2-93ba4508d831\") " pod="default/nfs-server-provisioner-0" Jan 13 20:41:30.163340 kubelet[1847]: E0113 20:41:30.163290 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:41:30.257720 containerd[1481]: time="2025-01-13T20:41:30.257121655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:13fa570f-f8d4-426c-afc2-93ba4508d831,Namespace:default,Attempt:0,}" Jan 13 20:41:30.434766 systemd-networkd[1409]: cali60e51b789ff: Link UP Jan 13 20:41:30.438332 systemd-networkd[1409]: cali60e51b789ff: Gained carrier Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.318 [INFO][4084] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.332 [INFO][4084] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.39-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 13fa570f-f8d4-426c-afc2-93ba4508d831 1357 0 2025-01-13 20:41:29 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.128.0.39 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.39-k8s-nfs--server--provisioner--0-" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.332 [INFO][4084] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.371 [INFO][4095] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" HandleID="k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" 
Workload="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.383 [INFO][4095] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" HandleID="k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Workload="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003043b0), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.39", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 20:41:30.371118515 +0000 UTC"}, Hostname:"10.128.0.39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.383 [INFO][4095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.383 [INFO][4095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.383 [INFO][4095] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.39' Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.386 [INFO][4095] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.390 [INFO][4095] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.395 [INFO][4095] ipam/ipam.go 489: Trying affinity for 192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.397 [INFO][4095] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.400 [INFO][4095] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.400 [INFO][4095] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.402 [INFO][4095] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9 Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.407 [INFO][4095] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.417 [INFO][4095] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.68/26] block=192.168.126.64/26 handle="k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.417 [INFO][4095] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.68/26] handle="k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" host="10.128.0.39" Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.417 [INFO][4095] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:41:30.471436 containerd[1481]: 2025-01-13 20:41:30.417 [INFO][4095] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.68/26] IPv6=[] ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" HandleID="k8s-pod-network.6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Workload="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:41:30.474337 containerd[1481]: 2025-01-13 20:41:30.423 [INFO][4084] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"13fa570f-f8d4-426c-afc2-93ba4508d831", ResourceVersion:"1357", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 41, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:41:30.474337 containerd[1481]: 2025-01-13 20:41:30.423 [INFO][4084] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.68/32] ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:41:30.474337 containerd[1481]: 2025-01-13 20:41:30.423 [INFO][4084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:41:30.474337 containerd[1481]: 2025-01-13 20:41:30.437 [INFO][4084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:41:30.474661 containerd[1481]: 2025-01-13 20:41:30.439 [INFO][4084] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"13fa570f-f8d4-426c-afc2-93ba4508d831", ResourceVersion:"1357", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 41, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0e:38:4a:2b:2b:1a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:41:30.474661 containerd[1481]: 2025-01-13 20:41:30.463 [INFO][4084] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.128.0.39-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:41:30.525291 containerd[1481]: time="2025-01-13T20:41:30.525187195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:41:30.525291 containerd[1481]: time="2025-01-13T20:41:30.525252887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:41:30.525527 containerd[1481]: time="2025-01-13T20:41:30.525270897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:30.525527 containerd[1481]: time="2025-01-13T20:41:30.525473509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:41:30.570219 systemd[1]: Started cri-containerd-6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9.scope - libcontainer container 6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9. 
Jan 13 20:41:30.625565 containerd[1481]: time="2025-01-13T20:41:30.625465256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:13fa570f-f8d4-426c-afc2-93ba4508d831,Namespace:default,Attempt:0,} returns sandbox id \"6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9\""
Jan 13 20:41:30.627897 containerd[1481]: time="2025-01-13T20:41:30.627640833Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 20:41:31.164819 kubelet[1847]: E0113 20:41:31.164714 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:31.230886 systemd[1]: run-containerd-runc-k8s.io-6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9-runc.tS0569.mount: Deactivated successfully.
Jan 13 20:41:32.051239 systemd-networkd[1409]: cali60e51b789ff: Gained IPv6LL
Jan 13 20:41:32.165807 kubelet[1847]: E0113 20:41:32.165764 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:33.160063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2455669692.mount: Deactivated successfully.
Jan 13 20:41:33.166732 kubelet[1847]: E0113 20:41:33.166640 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:34.166837 kubelet[1847]: E0113 20:41:34.166790 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:34.473998 ntpd[1452]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%6]:123
Jan 13 20:41:34.474916 ntpd[1452]: 13 Jan 20:41:34 ntpd[1452]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%6]:123
Jan 13 20:41:35.134784 kubelet[1847]: E0113 20:41:35.134733 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:35.169453 kubelet[1847]: E0113 20:41:35.169406 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:35.724666 containerd[1481]: time="2025-01-13T20:41:35.724598845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:35.726066 containerd[1481]: time="2025-01-13T20:41:35.726000004Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91045236"
Jan 13 20:41:35.727304 containerd[1481]: time="2025-01-13T20:41:35.727217085Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:35.737988 containerd[1481]: time="2025-01-13T20:41:35.737825413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:41:35.740149 containerd[1481]: time="2025-01-13T20:41:35.739922870Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.112236482s"
Jan 13 20:41:35.740149 containerd[1481]: time="2025-01-13T20:41:35.739999322Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Jan 13 20:41:35.744579 containerd[1481]: time="2025-01-13T20:41:35.744543056Z" level=info msg="CreateContainer within sandbox \"6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 20:41:35.763066 containerd[1481]: time="2025-01-13T20:41:35.762301000Z" level=info msg="CreateContainer within sandbox \"6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189\""
Jan 13 20:41:35.766125 containerd[1481]: time="2025-01-13T20:41:35.765991825Z" level=info msg="StartContainer for \"5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189\""
Jan 13 20:41:35.821211 systemd[1]: Started cri-containerd-5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189.scope - libcontainer container 5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189.
Jan 13 20:41:35.869549 containerd[1481]: time="2025-01-13T20:41:35.869473202Z" level=info msg="StartContainer for \"5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189\" returns successfully"
Jan 13 20:41:36.169664 kubelet[1847]: E0113 20:41:36.169586 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:37.170654 kubelet[1847]: E0113 20:41:37.170595 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:38.170996 kubelet[1847]: E0113 20:41:38.170902 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:39.172017 kubelet[1847]: E0113 20:41:39.171922 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:40.172610 kubelet[1847]: E0113 20:41:40.172529 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:41.173525 kubelet[1847]: E0113 20:41:41.173447 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:42.174313 kubelet[1847]: E0113 20:41:42.174241 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:43.174926 kubelet[1847]: E0113 20:41:43.174844 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:43.791204 kubelet[1847]: I0113 20:41:43.790444 1847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:41:44.074233 kubelet[1847]: I0113 20:41:44.074051 1847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:41:44.091599 kubelet[1847]: I0113 20:41:44.091272 1847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=9.976007267 podStartE2EDuration="15.09123691s" podCreationTimestamp="2025-01-13 20:41:29 +0000 UTC" firstStartedPulling="2025-01-13 20:41:30.627321923 +0000 UTC m=+36.933045011" lastFinishedPulling="2025-01-13 20:41:35.742551574 +0000 UTC m=+42.048274654" observedRunningTime="2025-01-13 20:41:36.663884203 +0000 UTC m=+42.969607293" watchObservedRunningTime="2025-01-13 20:41:44.09123691 +0000 UTC m=+50.396960003"
Jan 13 20:41:44.175401 kubelet[1847]: E0113 20:41:44.175328 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:44.732996 kernel: bpftool[4611]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 13 20:41:45.017988 systemd-networkd[1409]: vxlan.calico: Link UP
Jan 13 20:41:45.018003 systemd-networkd[1409]: vxlan.calico: Gained carrier
Jan 13 20:41:45.176540 kubelet[1847]: E0113 20:41:45.176476 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:46.177350 kubelet[1847]: E0113 20:41:46.177280 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:47.026430 systemd-networkd[1409]: vxlan.calico: Gained IPv6LL
Jan 13 20:41:47.177496 kubelet[1847]: E0113 20:41:47.177430 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:48.177853 kubelet[1847]: E0113 20:41:48.177775 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:49.178061 kubelet[1847]: E0113 20:41:49.177972 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:49.473927 ntpd[1452]: Listen normally on 12 vxlan.calico 192.168.126.64:123
Jan 13 20:41:49.474565 ntpd[1452]: 13 Jan 20:41:49 ntpd[1452]: Listen normally on 12 vxlan.calico 192.168.126.64:123
Jan 13 20:41:49.474565 ntpd[1452]: 13 Jan 20:41:49 ntpd[1452]: Listen normally on 13 vxlan.calico [fe80::643a:5fff:fe53:2a80%7]:123
Jan 13 20:41:49.474121 ntpd[1452]: Listen normally on 13 vxlan.calico [fe80::643a:5fff:fe53:2a80%7]:123
Jan 13 20:41:50.178465 kubelet[1847]: E0113 20:41:50.178397 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:51.179128 kubelet[1847]: E0113 20:41:51.179049 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:52.179512 kubelet[1847]: E0113 20:41:52.179438 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:53.180154 kubelet[1847]: E0113 20:41:53.180090 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:54.181320 kubelet[1847]: E0113 20:41:54.181246 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:55.135230 kubelet[1847]: E0113 20:41:55.135162 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:55.179599 containerd[1481]: time="2025-01-13T20:41:55.179539211Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\""
Jan 13 20:41:55.180277 containerd[1481]: time="2025-01-13T20:41:55.179717647Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully"
Jan 13 20:41:55.180277 containerd[1481]: time="2025-01-13T20:41:55.179739511Z" level=info msg="StopPodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully"
Jan 13 20:41:55.180427 containerd[1481]: time="2025-01-13T20:41:55.180376972Z" level=info msg="RemovePodSandbox for \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\""
Jan 13 20:41:55.180427 containerd[1481]: time="2025-01-13T20:41:55.180414798Z" level=info msg="Forcibly stopping sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\""
Jan 13 20:41:55.180580 containerd[1481]: time="2025-01-13T20:41:55.180517258Z" level=info msg="TearDown network for sandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" successfully"
Jan 13 20:41:55.182101 kubelet[1847]: E0113 20:41:55.182058 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:55.185103 containerd[1481]: time="2025-01-13T20:41:55.185056078Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.185325 containerd[1481]: time="2025-01-13T20:41:55.185139806Z" level=info msg="RemovePodSandbox \"83f12c164215052f1b8181e1b7610cda9230479488f5e405b76c00e3ca00a275\" returns successfully"
Jan 13 20:41:55.185762 containerd[1481]: time="2025-01-13T20:41:55.185689987Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\""
Jan 13 20:41:55.186041 containerd[1481]: time="2025-01-13T20:41:55.185838713Z" level=info msg="TearDown network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" successfully"
Jan 13 20:41:55.186041 containerd[1481]: time="2025-01-13T20:41:55.185858597Z" level=info msg="StopPodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" returns successfully"
Jan 13 20:41:55.186896 containerd[1481]: time="2025-01-13T20:41:55.186397303Z" level=info msg="RemovePodSandbox for \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\""
Jan 13 20:41:55.186896 containerd[1481]: time="2025-01-13T20:41:55.186433249Z" level=info msg="Forcibly stopping sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\""
Jan 13 20:41:55.186896 containerd[1481]: time="2025-01-13T20:41:55.186540548Z" level=info msg="TearDown network for sandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" successfully"
Jan 13 20:41:55.190584 containerd[1481]: time="2025-01-13T20:41:55.190531400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.190808 containerd[1481]: time="2025-01-13T20:41:55.190602019Z" level=info msg="RemovePodSandbox \"31e4ca12ede1cef497723b43dd3e25aee91419822906a8f0eeb161db35a9da94\" returns successfully"
Jan 13 20:41:55.191130 containerd[1481]: time="2025-01-13T20:41:55.191012517Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\""
Jan 13 20:41:55.191130 containerd[1481]: time="2025-01-13T20:41:55.191132952Z" level=info msg="TearDown network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" successfully"
Jan 13 20:41:55.191130 containerd[1481]: time="2025-01-13T20:41:55.191155545Z" level=info msg="StopPodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" returns successfully"
Jan 13 20:41:55.191562 containerd[1481]: time="2025-01-13T20:41:55.191529657Z" level=info msg="RemovePodSandbox for \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\""
Jan 13 20:41:55.191649 containerd[1481]: time="2025-01-13T20:41:55.191567294Z" level=info msg="Forcibly stopping sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\""
Jan 13 20:41:55.191716 containerd[1481]: time="2025-01-13T20:41:55.191665057Z" level=info msg="TearDown network for sandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" successfully"
Jan 13 20:41:55.195639 containerd[1481]: time="2025-01-13T20:41:55.195588772Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.195749 containerd[1481]: time="2025-01-13T20:41:55.195644775Z" level=info msg="RemovePodSandbox \"e191ad3d23f089035187943231b9efe8b59c89085f837e4203ed90bee16b30e5\" returns successfully"
Jan 13 20:41:55.196305 containerd[1481]: time="2025-01-13T20:41:55.196089328Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\""
Jan 13 20:41:55.196305 containerd[1481]: time="2025-01-13T20:41:55.196210806Z" level=info msg="TearDown network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" successfully"
Jan 13 20:41:55.196305 containerd[1481]: time="2025-01-13T20:41:55.196229572Z" level=info msg="StopPodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" returns successfully"
Jan 13 20:41:55.196627 containerd[1481]: time="2025-01-13T20:41:55.196594363Z" level=info msg="RemovePodSandbox for \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\""
Jan 13 20:41:55.196627 containerd[1481]: time="2025-01-13T20:41:55.196633649Z" level=info msg="Forcibly stopping sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\""
Jan 13 20:41:55.196971 containerd[1481]: time="2025-01-13T20:41:55.196742311Z" level=info msg="TearDown network for sandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" successfully"
Jan 13 20:41:55.200762 containerd[1481]: time="2025-01-13T20:41:55.200706789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.200883 containerd[1481]: time="2025-01-13T20:41:55.200775405Z" level=info msg="RemovePodSandbox \"02cb27e6f0a97de4eaab2c849eb4743af18a0f5e1a2d34a9542f52f7488cc8b0\" returns successfully"
Jan 13 20:41:55.201550 containerd[1481]: time="2025-01-13T20:41:55.201334157Z" level=info msg="StopPodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\""
Jan 13 20:41:55.201550 containerd[1481]: time="2025-01-13T20:41:55.201457940Z" level=info msg="TearDown network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" successfully"
Jan 13 20:41:55.201550 containerd[1481]: time="2025-01-13T20:41:55.201474223Z" level=info msg="StopPodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" returns successfully"
Jan 13 20:41:55.202777 containerd[1481]: time="2025-01-13T20:41:55.202486357Z" level=info msg="RemovePodSandbox for \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\""
Jan 13 20:41:55.202777 containerd[1481]: time="2025-01-13T20:41:55.202548780Z" level=info msg="Forcibly stopping sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\""
Jan 13 20:41:55.203197 containerd[1481]: time="2025-01-13T20:41:55.202667416Z" level=info msg="TearDown network for sandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" successfully"
Jan 13 20:41:55.207337 containerd[1481]: time="2025-01-13T20:41:55.207294046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.207498 containerd[1481]: time="2025-01-13T20:41:55.207364714Z" level=info msg="RemovePodSandbox \"e83ec508d625e6780536f84c047644381c08f66246696fea05db35bd662f42dc\" returns successfully"
Jan 13 20:41:55.207853 containerd[1481]: time="2025-01-13T20:41:55.207805161Z" level=info msg="StopPodSandbox for \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\""
Jan 13 20:41:55.208031 containerd[1481]: time="2025-01-13T20:41:55.207923411Z" level=info msg="TearDown network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\" successfully"
Jan 13 20:41:55.208031 containerd[1481]: time="2025-01-13T20:41:55.207943766Z" level=info msg="StopPodSandbox for \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\" returns successfully"
Jan 13 20:41:55.208485 containerd[1481]: time="2025-01-13T20:41:55.208460186Z" level=info msg="RemovePodSandbox for \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\""
Jan 13 20:41:55.208610 containerd[1481]: time="2025-01-13T20:41:55.208520826Z" level=info msg="Forcibly stopping sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\""
Jan 13 20:41:55.208671 containerd[1481]: time="2025-01-13T20:41:55.208627755Z" level=info msg="TearDown network for sandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\" successfully"
Jan 13 20:41:55.212499 containerd[1481]: time="2025-01-13T20:41:55.212428744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.212499 containerd[1481]: time="2025-01-13T20:41:55.212486787Z" level=info msg="RemovePodSandbox \"32cb2001a76a1c5ca401974a294cddddae3f69d4771a8802ee2509471c937652\" returns successfully"
Jan 13 20:41:55.212969 containerd[1481]: time="2025-01-13T20:41:55.212910374Z" level=info msg="StopPodSandbox for \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\""
Jan 13 20:41:55.213065 containerd[1481]: time="2025-01-13T20:41:55.213042809Z" level=info msg="TearDown network for sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\" successfully"
Jan 13 20:41:55.213132 containerd[1481]: time="2025-01-13T20:41:55.213061078Z" level=info msg="StopPodSandbox for \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\" returns successfully"
Jan 13 20:41:55.213613 containerd[1481]: time="2025-01-13T20:41:55.213476113Z" level=info msg="RemovePodSandbox for \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\""
Jan 13 20:41:55.213613 containerd[1481]: time="2025-01-13T20:41:55.213509229Z" level=info msg="Forcibly stopping sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\""
Jan 13 20:41:55.213815 containerd[1481]: time="2025-01-13T20:41:55.213604250Z" level=info msg="TearDown network for sandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\" successfully"
Jan 13 20:41:55.217398 containerd[1481]: time="2025-01-13T20:41:55.217357240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.217701 containerd[1481]: time="2025-01-13T20:41:55.217417346Z" level=info msg="RemovePodSandbox \"e9557cb80a301f8ae7592ea68b6f365444caff4252e6307ff5f8cb90bf5e2c46\" returns successfully"
Jan 13 20:41:55.218061 containerd[1481]: time="2025-01-13T20:41:55.217827797Z" level=info msg="StopPodSandbox for \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\""
Jan 13 20:41:55.218061 containerd[1481]: time="2025-01-13T20:41:55.217918258Z" level=info msg="TearDown network for sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" successfully"
Jan 13 20:41:55.218061 containerd[1481]: time="2025-01-13T20:41:55.217936182Z" level=info msg="StopPodSandbox for \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" returns successfully"
Jan 13 20:41:55.218534 containerd[1481]: time="2025-01-13T20:41:55.218478530Z" level=info msg="RemovePodSandbox for \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\""
Jan 13 20:41:55.218534 containerd[1481]: time="2025-01-13T20:41:55.218514191Z" level=info msg="Forcibly stopping sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\""
Jan 13 20:41:55.218660 containerd[1481]: time="2025-01-13T20:41:55.218588902Z" level=info msg="TearDown network for sandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" successfully"
Jan 13 20:41:55.223110 containerd[1481]: time="2025-01-13T20:41:55.223045460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.223110 containerd[1481]: time="2025-01-13T20:41:55.223105452Z" level=info msg="RemovePodSandbox \"a548a6ed8acb2ed556e3a0994339d985bc57312c340ecda2efae6b7128b998f6\" returns successfully"
Jan 13 20:41:55.223686 containerd[1481]: time="2025-01-13T20:41:55.223621524Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\""
Jan 13 20:41:55.223823 containerd[1481]: time="2025-01-13T20:41:55.223778841Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully"
Jan 13 20:41:55.223823 containerd[1481]: time="2025-01-13T20:41:55.223806755Z" level=info msg="StopPodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully"
Jan 13 20:41:55.224287 containerd[1481]: time="2025-01-13T20:41:55.224214200Z" level=info msg="RemovePodSandbox for \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\""
Jan 13 20:41:55.224423 containerd[1481]: time="2025-01-13T20:41:55.224289801Z" level=info msg="Forcibly stopping sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\""
Jan 13 20:41:55.224513 containerd[1481]: time="2025-01-13T20:41:55.224394074Z" level=info msg="TearDown network for sandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" successfully"
Jan 13 20:41:55.228290 containerd[1481]: time="2025-01-13T20:41:55.228240838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.228467 containerd[1481]: time="2025-01-13T20:41:55.228302587Z" level=info msg="RemovePodSandbox \"177faf089efd9c460f2e707ccd03f15b7d4f78cc4ff51706514665903472f5e9\" returns successfully"
Jan 13 20:41:55.228813 containerd[1481]: time="2025-01-13T20:41:55.228752758Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\""
Jan 13 20:41:55.228913 containerd[1481]: time="2025-01-13T20:41:55.228871902Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully"
Jan 13 20:41:55.228913 containerd[1481]: time="2025-01-13T20:41:55.228890273Z" level=info msg="StopPodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully"
Jan 13 20:41:55.229438 containerd[1481]: time="2025-01-13T20:41:55.229408360Z" level=info msg="RemovePodSandbox for \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\""
Jan 13 20:41:55.229536 containerd[1481]: time="2025-01-13T20:41:55.229444598Z" level=info msg="Forcibly stopping sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\""
Jan 13 20:41:55.229596 containerd[1481]: time="2025-01-13T20:41:55.229539255Z" level=info msg="TearDown network for sandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" successfully"
Jan 13 20:41:55.232987 containerd[1481]: time="2025-01-13T20:41:55.232919749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.233100 containerd[1481]: time="2025-01-13T20:41:55.233004881Z" level=info msg="RemovePodSandbox \"9cdc7ddf17ee348afb9100bcf4a51c5ee56c7a8d6d85b2ad6a5b966d65412d33\" returns successfully"
Jan 13 20:41:55.233592 containerd[1481]: time="2025-01-13T20:41:55.233423314Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\""
Jan 13 20:41:55.233592 containerd[1481]: time="2025-01-13T20:41:55.233549350Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully"
Jan 13 20:41:55.233592 containerd[1481]: time="2025-01-13T20:41:55.233569154Z" level=info msg="StopPodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully"
Jan 13 20:41:55.234189 containerd[1481]: time="2025-01-13T20:41:55.233904730Z" level=info msg="RemovePodSandbox for \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\""
Jan 13 20:41:55.234189 containerd[1481]: time="2025-01-13T20:41:55.233932417Z" level=info msg="Forcibly stopping sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\""
Jan 13 20:41:55.234189 containerd[1481]: time="2025-01-13T20:41:55.234064015Z" level=info msg="TearDown network for sandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" successfully"
Jan 13 20:41:55.237752 containerd[1481]: time="2025-01-13T20:41:55.237670958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.237848 containerd[1481]: time="2025-01-13T20:41:55.237760441Z" level=info msg="RemovePodSandbox \"c4c2a0bb72884ac1648c6c3c8000bf80383870b1cacc727495aac6cd8eedea3c\" returns successfully"
Jan 13 20:41:55.238285 containerd[1481]: time="2025-01-13T20:41:55.238251304Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\""
Jan 13 20:41:55.238410 containerd[1481]: time="2025-01-13T20:41:55.238392684Z" level=info msg="TearDown network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" successfully"
Jan 13 20:41:55.238475 containerd[1481]: time="2025-01-13T20:41:55.238412273Z" level=info msg="StopPodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" returns successfully"
Jan 13 20:41:55.238901 containerd[1481]: time="2025-01-13T20:41:55.238769384Z" level=info msg="RemovePodSandbox for \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\""
Jan 13 20:41:55.238901 containerd[1481]: time="2025-01-13T20:41:55.238802715Z" level=info msg="Forcibly stopping sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\""
Jan 13 20:41:55.239141 containerd[1481]: time="2025-01-13T20:41:55.238906798Z" level=info msg="TearDown network for sandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" successfully"
Jan 13 20:41:55.242403 containerd[1481]: time="2025-01-13T20:41:55.242344892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.242680 containerd[1481]: time="2025-01-13T20:41:55.242404963Z" level=info msg="RemovePodSandbox \"8378de0701bcb3ba61cf65f5f477735aaa8daee1f982e782229f6c7dcaa644a1\" returns successfully"
Jan 13 20:41:55.242931 containerd[1481]: time="2025-01-13T20:41:55.242900763Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\""
Jan 13 20:41:55.243128 containerd[1481]: time="2025-01-13T20:41:55.243083860Z" level=info msg="TearDown network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" successfully"
Jan 13 20:41:55.243128 containerd[1481]: time="2025-01-13T20:41:55.243111595Z" level=info msg="StopPodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" returns successfully"
Jan 13 20:41:55.243525 containerd[1481]: time="2025-01-13T20:41:55.243477050Z" level=info msg="RemovePodSandbox for \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\""
Jan 13 20:41:55.243525 containerd[1481]: time="2025-01-13T20:41:55.243511556Z" level=info msg="Forcibly stopping sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\""
Jan 13 20:41:55.243669 containerd[1481]: time="2025-01-13T20:41:55.243609540Z" level=info msg="TearDown network for sandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" successfully"
Jan 13 20:41:55.247120 containerd[1481]: time="2025-01-13T20:41:55.247057542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.247262 containerd[1481]: time="2025-01-13T20:41:55.247122175Z" level=info msg="RemovePodSandbox \"602c1ebe34e68130699eecaaf5d9abbab42cd5d7676b03300010524bc66725e8\" returns successfully"
Jan 13 20:41:55.247591 containerd[1481]: time="2025-01-13T20:41:55.247526749Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\""
Jan 13 20:41:55.247731 containerd[1481]: time="2025-01-13T20:41:55.247649074Z" level=info msg="TearDown network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" successfully"
Jan 13 20:41:55.247731 containerd[1481]: time="2025-01-13T20:41:55.247668282Z" level=info msg="StopPodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" returns successfully"
Jan 13 20:41:55.248101 containerd[1481]: time="2025-01-13T20:41:55.248073047Z" level=info msg="RemovePodSandbox for \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\""
Jan 13 20:41:55.248165 containerd[1481]: time="2025-01-13T20:41:55.248109696Z" level=info msg="Forcibly stopping sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\""
Jan 13 20:41:55.248343 containerd[1481]: time="2025-01-13T20:41:55.248206448Z" level=info msg="TearDown network for sandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" successfully"
Jan 13 20:41:55.251519 containerd[1481]: time="2025-01-13T20:41:55.251478328Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.251681 containerd[1481]: time="2025-01-13T20:41:55.251537089Z" level=info msg="RemovePodSandbox \"8a0eeeaa975432271a62fc8f766645198b5cc41b767caa67dd855edd64d67cda\" returns successfully"
Jan 13 20:41:55.251997 containerd[1481]: time="2025-01-13T20:41:55.251948357Z" level=info msg="StopPodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\""
Jan 13 20:41:55.252182 containerd[1481]: time="2025-01-13T20:41:55.252104613Z" level=info msg="TearDown network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" successfully"
Jan 13 20:41:55.252182 containerd[1481]: time="2025-01-13T20:41:55.252129951Z" level=info msg="StopPodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" returns successfully"
Jan 13 20:41:55.252530 containerd[1481]: time="2025-01-13T20:41:55.252469179Z" level=info msg="RemovePodSandbox for \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\""
Jan 13 20:41:55.252530 containerd[1481]: time="2025-01-13T20:41:55.252501144Z" level=info msg="Forcibly stopping sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\""
Jan 13 20:41:55.252668 containerd[1481]: time="2025-01-13T20:41:55.252596333Z" level=info msg="TearDown network for sandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" successfully"
Jan 13 20:41:55.256055 containerd[1481]: time="2025-01-13T20:41:55.255994376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.256206 containerd[1481]: time="2025-01-13T20:41:55.256059049Z" level=info msg="RemovePodSandbox \"214664ba1f2575381c856d62599c93a6d9fcb26cd1c1ec77068140f1c491e2ad\" returns successfully"
Jan 13 20:41:55.256485 containerd[1481]: time="2025-01-13T20:41:55.256436010Z" level=info msg="StopPodSandbox for \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\""
Jan 13 20:41:55.256593 containerd[1481]: time="2025-01-13T20:41:55.256551144Z" level=info msg="TearDown network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\" successfully"
Jan 13 20:41:55.256593 containerd[1481]: time="2025-01-13T20:41:55.256573140Z" level=info msg="StopPodSandbox for \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\" returns successfully"
Jan 13 20:41:55.257066 containerd[1481]: time="2025-01-13T20:41:55.257011965Z" level=info msg="RemovePodSandbox for \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\""
Jan 13 20:41:55.257158 containerd[1481]: time="2025-01-13T20:41:55.257071623Z" level=info msg="Forcibly stopping sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\""
Jan 13 20:41:55.257211 containerd[1481]: time="2025-01-13T20:41:55.257165273Z" level=info msg="TearDown network for sandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\" successfully"
Jan 13 20:41:55.260574 containerd[1481]: time="2025-01-13T20:41:55.260519501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.260675 containerd[1481]: time="2025-01-13T20:41:55.260579816Z" level=info msg="RemovePodSandbox \"a612c6d1d26018dbe9fd56ddb2cd865b3ba1d76a35580c78605705ceed212958\" returns successfully"
Jan 13 20:41:55.261089 containerd[1481]: time="2025-01-13T20:41:55.261047961Z" level=info msg="StopPodSandbox for \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\""
Jan 13 20:41:55.261216 containerd[1481]: time="2025-01-13T20:41:55.261178600Z" level=info msg="TearDown network for sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\" successfully"
Jan 13 20:41:55.261216 containerd[1481]: time="2025-01-13T20:41:55.261200532Z" level=info msg="StopPodSandbox for \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\" returns successfully"
Jan 13 20:41:55.261588 containerd[1481]: time="2025-01-13T20:41:55.261556894Z" level=info msg="RemovePodSandbox for \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\""
Jan 13 20:41:55.261680 containerd[1481]: time="2025-01-13T20:41:55.261594134Z" level=info msg="Forcibly stopping sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\""
Jan 13 20:41:55.261906 containerd[1481]: time="2025-01-13T20:41:55.261697775Z" level=info msg="TearDown network for sandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\" successfully"
Jan 13 20:41:55.265169 containerd[1481]: time="2025-01-13T20:41:55.265126528Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:41:55.265273 containerd[1481]: time="2025-01-13T20:41:55.265184661Z" level=info msg="RemovePodSandbox \"f62a20d9d3d8059df917645d746cc00d99315f174bf096ca9a48f5db3ed05428\" returns successfully"
Jan 13 20:41:56.182976 kubelet[1847]: E0113 20:41:56.182905 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:57.183742 kubelet[1847]: E0113 20:41:57.183678 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:58.184467 kubelet[1847]: E0113 20:41:58.184395 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:41:59.185510 kubelet[1847]: E0113 20:41:59.185426 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:00.185980 kubelet[1847]: E0113 20:42:00.185853 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:01.186574 kubelet[1847]: E0113 20:42:01.186509 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:02.187426 kubelet[1847]: E0113 20:42:02.187347 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:03.188132 kubelet[1847]: E0113 20:42:03.188055 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:04.189165 kubelet[1847]: E0113 20:42:04.189083 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:05.189915 kubelet[1847]: E0113 20:42:05.189828 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:06.190212 kubelet[1847]: E0113 20:42:06.190050 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:07.191350 kubelet[1847]: E0113 20:42:07.191255 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:08.192471 kubelet[1847]: E0113 20:42:08.192402 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:09.193566 kubelet[1847]: E0113 20:42:09.193498 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:10.194504 kubelet[1847]: E0113 20:42:10.194435 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:10.999776 systemd[1]: cri-containerd-5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189.scope: Deactivated successfully.
Jan 13 20:42:11.035465 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189-rootfs.mount: Deactivated successfully.
Jan 13 20:42:11.048545 containerd[1481]: time="2025-01-13T20:42:11.048448794Z" level=info msg="shim disconnected" id=5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189 namespace=k8s.io
Jan 13 20:42:11.048545 containerd[1481]: time="2025-01-13T20:42:11.048542089Z" level=warning msg="cleaning up after shim disconnected" id=5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189 namespace=k8s.io
Jan 13 20:42:11.049302 containerd[1481]: time="2025-01-13T20:42:11.048557456Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:42:11.194935 kubelet[1847]: E0113 20:42:11.194848 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:11.738142 kubelet[1847]: I0113 20:42:11.737683 1847 scope.go:117] "RemoveContainer" containerID="5b9cb25657f84efbc4d65641a23b6e8a4147c9793926289ee7f89c7cab0f9189"
Jan 13 20:42:11.740809 containerd[1481]: time="2025-01-13T20:42:11.740753556Z" level=info msg="CreateContainer within sandbox \"6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:1,}"
Jan 13 20:42:11.765726 containerd[1481]: time="2025-01-13T20:42:11.765075061Z" level=info msg="CreateContainer within sandbox \"6b98fdc708af546fe6d7587cc335bd43ff554ed349fe6e1874fcfbdb139ad9d9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:1,} returns container id \"fa00b50fdcfe0d0224e5952625f4167d0be83565bf9607d5fd7318e45ebed24e\""
Jan 13 20:42:11.769597 containerd[1481]: time="2025-01-13T20:42:11.768180317Z" level=info msg="StartContainer for \"fa00b50fdcfe0d0224e5952625f4167d0be83565bf9607d5fd7318e45ebed24e\""
Jan 13 20:42:11.847991 systemd[1]: Started cri-containerd-fa00b50fdcfe0d0224e5952625f4167d0be83565bf9607d5fd7318e45ebed24e.scope - libcontainer container fa00b50fdcfe0d0224e5952625f4167d0be83565bf9607d5fd7318e45ebed24e.
Jan 13 20:42:11.900681 containerd[1481]: time="2025-01-13T20:42:11.900607280Z" level=info msg="StartContainer for \"fa00b50fdcfe0d0224e5952625f4167d0be83565bf9607d5fd7318e45ebed24e\" returns successfully"
Jan 13 20:42:12.195176 kubelet[1847]: E0113 20:42:12.195078 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:13.196091 kubelet[1847]: E0113 20:42:13.196012 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:14.196456 kubelet[1847]: E0113 20:42:14.196375 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:15.134972 kubelet[1847]: E0113 20:42:15.134904 1847 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:15.197328 kubelet[1847]: E0113 20:42:15.197268 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:16.198357 kubelet[1847]: E0113 20:42:16.198285 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:17.199378 kubelet[1847]: E0113 20:42:17.199287 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:18.199850 kubelet[1847]: E0113 20:42:18.199774 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:19.200879 kubelet[1847]: E0113 20:42:19.200778 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:20.201552 kubelet[1847]: E0113 20:42:20.201465 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:21.202485 kubelet[1847]: E0113 20:42:21.202405 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:21.354512 systemd[1]: Created slice kubepods-besteffort-podfbca44a1_562c_4b36_b9ce_8bec9b03726f.slice - libcontainer container kubepods-besteffort-podfbca44a1_562c_4b36_b9ce_8bec9b03726f.slice.
Jan 13 20:42:21.515615 kubelet[1847]: I0113 20:42:21.515545 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm76f\" (UniqueName: \"kubernetes.io/projected/fbca44a1-562c-4b36-b9ce-8bec9b03726f-kube-api-access-cm76f\") pod \"test-pod-1\" (UID: \"fbca44a1-562c-4b36-b9ce-8bec9b03726f\") " pod="default/test-pod-1"
Jan 13 20:42:21.516057 kubelet[1847]: I0113 20:42:21.515684 1847 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e5155be4-a5bd-4251-b24a-afd60a4af118\" (UniqueName: \"kubernetes.io/nfs/fbca44a1-562c-4b36-b9ce-8bec9b03726f-pvc-e5155be4-a5bd-4251-b24a-afd60a4af118\") pod \"test-pod-1\" (UID: \"fbca44a1-562c-4b36-b9ce-8bec9b03726f\") " pod="default/test-pod-1"
Jan 13 20:42:21.661273 kernel: FS-Cache: Loaded
Jan 13 20:42:21.743685 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:42:21.743905 kernel: RPC: Registered udp transport module.
Jan 13 20:42:21.744009 kernel: RPC: Registered tcp transport module.
Jan 13 20:42:21.748568 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:42:21.754131 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:42:22.037186 kernel: NFS: Registering the id_resolver key type
Jan 13 20:42:22.037381 kernel: Key type id_resolver registered
Jan 13 20:42:22.037438 kernel: Key type id_legacy registered
Jan 13 20:42:22.080188 nfsidmap[4904]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal'
Jan 13 20:42:22.091100 nfsidmap[4905]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'c.flatcar-212911.internal'
Jan 13 20:42:22.203569 kubelet[1847]: E0113 20:42:22.203500 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:22.262481 containerd[1481]: time="2025-01-13T20:42:22.262399310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fbca44a1-562c-4b36-b9ce-8bec9b03726f,Namespace:default,Attempt:0,}"
Jan 13 20:42:22.423266 systemd-networkd[1409]: cali5ec59c6bf6e: Link UP
Jan 13 20:42:22.425019 systemd-networkd[1409]: cali5ec59c6bf6e: Gained carrier
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.328 [INFO][4909] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.128.0.39-k8s-test--pod--1-eth0 default fbca44a1-562c-4b36-b9ce-8bec9b03726f 1548 0 2025-01-13 20:41:30 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.128.0.39 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.39-k8s-test--pod--1-"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.328 [INFO][4909] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.39-k8s-test--pod--1-eth0"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.363 [INFO][4917] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" HandleID="k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Workload="10.128.0.39-k8s-test--pod--1-eth0"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.377 [INFO][4917] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" HandleID="k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Workload="10.128.0.39-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ecd30), Attrs:map[string]string{"namespace":"default", "node":"10.128.0.39", "pod":"test-pod-1", "timestamp":"2025-01-13 20:42:22.363601196 +0000 UTC"}, Hostname:"10.128.0.39", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.377 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.377 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.377 [INFO][4917] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.128.0.39'
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.379 [INFO][4917] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.384 [INFO][4917] ipam/ipam.go 372: Looking up existing affinities for host host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.390 [INFO][4917] ipam/ipam.go 489: Trying affinity for 192.168.126.64/26 host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.394 [INFO][4917] ipam/ipam.go 155: Attempting to load block cidr=192.168.126.64/26 host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.397 [INFO][4917] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.397 [INFO][4917] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.399 [INFO][4917] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.404 [INFO][4917] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.415 [INFO][4917] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.126.69/26] block=192.168.126.64/26 handle="k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.416 [INFO][4917] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.69/26] handle="k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" host="10.128.0.39"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.416 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.416 [INFO][4917] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.126.69/26] IPv6=[] ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" HandleID="k8s-pod-network.0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Workload="10.128.0.39-k8s-test--pod--1-eth0"
Jan 13 20:42:22.442933 containerd[1481]: 2025-01-13 20:42:22.418 [INFO][4909] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.39-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"fbca44a1-562c-4b36-b9ce-8bec9b03726f", ResourceVersion:"1548", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 41, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:42:22.446631 containerd[1481]: 2025-01-13 20:42:22.419 [INFO][4909] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.126.69/32] ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.39-k8s-test--pod--1-eth0"
Jan 13 20:42:22.446631 containerd[1481]: 2025-01-13 20:42:22.419 [INFO][4909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.39-k8s-test--pod--1-eth0"
Jan 13 20:42:22.446631 containerd[1481]: 2025-01-13 20:42:22.424 [INFO][4909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.39-k8s-test--pod--1-eth0"
Jan 13 20:42:22.446631 containerd[1481]: 2025-01-13 20:42:22.424 [INFO][4909] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.39-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.39-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"fbca44a1-562c-4b36-b9ce-8bec9b03726f", ResourceVersion:"1548", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 41, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.128.0.39", ContainerID:"0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"a6:ee:d5:26:fa:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:42:22.446631 containerd[1481]: 2025-01-13 20:42:22.440 [INFO][4909] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.128.0.39-k8s-test--pod--1-eth0"
Jan 13 20:42:22.479541 containerd[1481]: time="2025-01-13T20:42:22.479088980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:42:22.479541 containerd[1481]: time="2025-01-13T20:42:22.479250281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:42:22.479541 containerd[1481]: time="2025-01-13T20:42:22.479324510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:42:22.479888 containerd[1481]: time="2025-01-13T20:42:22.479732544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:42:22.510198 systemd[1]: Started cri-containerd-0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3.scope - libcontainer container 0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3.
Jan 13 20:42:22.564603 containerd[1481]: time="2025-01-13T20:42:22.564552549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fbca44a1-562c-4b36-b9ce-8bec9b03726f,Namespace:default,Attempt:0,} returns sandbox id \"0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3\""
Jan 13 20:42:22.567361 containerd[1481]: time="2025-01-13T20:42:22.567323838Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:42:22.777855 containerd[1481]: time="2025-01-13T20:42:22.777788220Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:42:22.779058 containerd[1481]: time="2025-01-13T20:42:22.778989015Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:42:22.782609 containerd[1481]: time="2025-01-13T20:42:22.782542829Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"71035896\" in 215.154155ms"
Jan 13 20:42:22.782609 containerd[1481]: time="2025-01-13T20:42:22.782588139Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:29ef6eaebfc53650f3a4609edbf9d35e866f56b2c5e01d32d93439031b300f0b\""
Jan 13 20:42:22.785247 containerd[1481]: time="2025-01-13T20:42:22.785204244Z" level=info msg="CreateContainer within sandbox \"0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:42:22.805289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035083308.mount: Deactivated successfully.
Jan 13 20:42:22.809039 containerd[1481]: time="2025-01-13T20:42:22.808984685Z" level=info msg="CreateContainer within sandbox \"0a059173eb3356645a1e784ceb5837ecb9ce6dc11698dd9eaa9f77931c3df0a3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"04ef68acb57d68d0a9230471587ddea38705a33c42ae8b718588586c7a29f3c8\""
Jan 13 20:42:22.809930 containerd[1481]: time="2025-01-13T20:42:22.809815939Z" level=info msg="StartContainer for \"04ef68acb57d68d0a9230471587ddea38705a33c42ae8b718588586c7a29f3c8\""
Jan 13 20:42:22.863166 systemd[1]: Started cri-containerd-04ef68acb57d68d0a9230471587ddea38705a33c42ae8b718588586c7a29f3c8.scope - libcontainer container 04ef68acb57d68d0a9230471587ddea38705a33c42ae8b718588586c7a29f3c8.
Jan 13 20:42:22.900685 containerd[1481]: time="2025-01-13T20:42:22.900432510Z" level=info msg="StartContainer for \"04ef68acb57d68d0a9230471587ddea38705a33c42ae8b718588586c7a29f3c8\" returns successfully"
Jan 13 20:42:23.204662 kubelet[1847]: E0113 20:42:23.204488 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:23.633830 systemd[1]: run-containerd-runc-k8s.io-04ef68acb57d68d0a9230471587ddea38705a33c42ae8b718588586c7a29f3c8-runc.DHUjIn.mount: Deactivated successfully.
Jan 13 20:42:24.204901 kubelet[1847]: E0113 20:42:24.204828 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:24.210767 systemd-networkd[1409]: cali5ec59c6bf6e: Gained IPv6LL
Jan 13 20:42:25.206086 kubelet[1847]: E0113 20:42:25.205991 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:26.206269 kubelet[1847]: E0113 20:42:26.206195 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:26.474110 ntpd[1452]: Listen normally on 14 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123
Jan 13 20:42:26.474844 ntpd[1452]: 13 Jan 20:42:26 ntpd[1452]: Listen normally on 14 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123
Jan 13 20:42:27.207529 kubelet[1847]: E0113 20:42:27.207452 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:28.208437 kubelet[1847]: E0113 20:42:28.208319 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:29.209018 kubelet[1847]: E0113 20:42:29.208910 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:30.209912 kubelet[1847]: E0113 20:42:30.209830 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:42:31.210586 kubelet[1847]: E0113 20:42:31.210483 1847 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"