Jan 13 20:22:03.862848 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 13 20:22:03.862871 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:56:28 -00 2025 Jan 13 20:22:03.862881 kernel: KASLR enabled Jan 13 20:22:03.862887 kernel: efi: EFI v2.7 by EDK II Jan 13 20:22:03.862893 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133c6b018 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132357218 Jan 13 20:22:03.862898 kernel: random: crng init done Jan 13 20:22:03.862905 kernel: secureboot: Secure boot disabled Jan 13 20:22:03.862911 kernel: ACPI: Early table checksum verification disabled Jan 13 20:22:03.862917 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS ) Jan 13 20:22:03.862922 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 13 20:22:03.862930 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862936 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862941 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862947 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862954 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862962 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862968 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862974 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862980 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:03.862986 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 13 20:22:03.862992 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 13 20:22:03.862998 kernel: NUMA: Failed to initialise from firmware Jan 13 20:22:03.863004 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 13 20:22:03.863010 kernel: NUMA: NODE_DATA [mem 0x139821800-0x139826fff] Jan 13 20:22:03.863016 kernel: Zone ranges: Jan 13 20:22:03.863022 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 20:22:03.863030 kernel: DMA32 empty Jan 13 20:22:03.863036 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 13 20:22:03.863042 kernel: Movable zone start for each node Jan 13 20:22:03.863048 kernel: Early memory node ranges Jan 13 20:22:03.863054 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff] Jan 13 20:22:03.863060 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff] Jan 13 20:22:03.863066 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff] Jan 13 20:22:03.863072 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff] Jan 13 20:22:03.863078 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff] Jan 13 20:22:03.863084 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 13 20:22:03.863090 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 13 20:22:03.863097 kernel: psci: probing for conduit method from ACPI. Jan 13 20:22:03.863103 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 13 20:22:03.863109 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 20:22:03.863118 kernel: psci: Trusted OS migration not required Jan 13 20:22:03.863124 kernel: psci: SMC Calling Convention v1.1 Jan 13 20:22:03.863131 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 13 20:22:03.863139 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 20:22:03.863145 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 20:22:03.863152 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 20:22:03.863158 kernel: Detected PIPT I-cache on CPU0 Jan 13 20:22:03.863165 kernel: CPU features: detected: GIC system register CPU interface Jan 13 20:22:03.863171 kernel: CPU features: detected: Hardware dirty bit management Jan 13 20:22:03.863178 kernel: CPU features: detected: Spectre-v4 Jan 13 20:22:03.863184 kernel: CPU features: detected: Spectre-BHB Jan 13 20:22:03.863191 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 13 20:22:03.863197 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 13 20:22:03.863203 kernel: CPU features: detected: ARM erratum 1418040 Jan 13 20:22:03.863211 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 13 20:22:03.863218 kernel: alternatives: applying boot alternatives Jan 13 20:22:03.863225 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc Jan 13 20:22:03.863232 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:22:03.863238 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:22:03.863245 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:22:03.863251 kernel: Fallback order for Node 0: 0 Jan 13 20:22:03.863258 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jan 13 20:22:03.863299 kernel: Policy zone: Normal Jan 13 20:22:03.863308 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:22:03.863314 kernel: software IO TLB: area num 2. Jan 13 20:22:03.863323 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jan 13 20:22:03.863330 kernel: Memory: 3881024K/4096000K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 214976K reserved, 0K cma-reserved) Jan 13 20:22:03.863336 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:22:03.863343 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:22:03.863350 kernel: rcu: RCU event tracing is enabled. Jan 13 20:22:03.863357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:22:03.863363 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:22:03.863370 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:22:03.863376 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 20:22:03.863383 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:22:03.863389 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 20:22:03.863397 kernel: GICv3: 256 SPIs implemented Jan 13 20:22:03.863420 kernel: GICv3: 0 Extended SPIs implemented Jan 13 20:22:03.863429 kernel: Root IRQ handler: gic_handle_irq Jan 13 20:22:03.863435 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 13 20:22:03.864084 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 13 20:22:03.864104 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 13 20:22:03.864115 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 20:22:03.864125 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jan 13 20:22:03.864132 kernel: GICv3: using LPI property table @0x00000001000e0000 Jan 13 20:22:03.864139 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jan 13 20:22:03.864145 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:22:03.864157 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:22:03.864163 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 13 20:22:03.864170 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 13 20:22:03.864177 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 13 20:22:03.864184 kernel: Console: colour dummy device 80x25 Jan 13 20:22:03.864190 kernel: ACPI: Core revision 20230628 Jan 13 20:22:03.864197 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 13 20:22:03.864204 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:22:03.864211 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:22:03.864217 kernel: landlock: Up and running. Jan 13 20:22:03.864225 kernel: SELinux: Initializing. Jan 13 20:22:03.864232 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:22:03.864239 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:22:03.864245 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:22:03.864252 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:22:03.864259 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:22:03.864280 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:22:03.864287 kernel: Platform MSI: ITS@0x8080000 domain created Jan 13 20:22:03.864294 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 13 20:22:03.864303 kernel: Remapping and enabling EFI services. Jan 13 20:22:03.864310 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:22:03.864317 kernel: Detected PIPT I-cache on CPU1 Jan 13 20:22:03.864323 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 13 20:22:03.864330 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jan 13 20:22:03.864337 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:22:03.864344 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 13 20:22:03.864350 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:22:03.864357 kernel: SMP: Total of 2 processors activated. 
Jan 13 20:22:03.864363 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 20:22:03.864372 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 13 20:22:03.864379 kernel: CPU features: detected: Common not Private translations Jan 13 20:22:03.864390 kernel: CPU features: detected: CRC32 instructions Jan 13 20:22:03.864398 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 13 20:22:03.864416 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 13 20:22:03.864424 kernel: CPU features: detected: LSE atomic instructions Jan 13 20:22:03.864431 kernel: CPU features: detected: Privileged Access Never Jan 13 20:22:03.864438 kernel: CPU features: detected: RAS Extension Support Jan 13 20:22:03.864445 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 13 20:22:03.864454 kernel: CPU: All CPU(s) started at EL1 Jan 13 20:22:03.864461 kernel: alternatives: applying system-wide alternatives Jan 13 20:22:03.864468 kernel: devtmpfs: initialized Jan 13 20:22:03.864475 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:22:03.864482 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:22:03.864489 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:22:03.864496 kernel: SMBIOS 3.0.0 present. Jan 13 20:22:03.864504 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 13 20:22:03.864511 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:22:03.864518 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 20:22:03.864525 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 20:22:03.864532 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 20:22:03.864539 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:22:03.864546 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1 Jan 13 20:22:03.864553 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:22:03.864560 kernel: cpuidle: using governor menu Jan 13 20:22:03.864569 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 13 20:22:03.864576 kernel: ASID allocator initialised with 32768 entries Jan 13 20:22:03.864583 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:22:03.864590 kernel: Serial: AMBA PL011 UART driver Jan 13 20:22:03.864597 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 13 20:22:03.864604 kernel: Modules: 0 pages in range for non-PLT usage Jan 13 20:22:03.864611 kernel: Modules: 508880 pages in range for PLT usage Jan 13 20:22:03.864618 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:22:03.864625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:22:03.864633 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 20:22:03.864640 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 20:22:03.864647 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:22:03.864654 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:22:03.864661 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 20:22:03.864668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 20:22:03.864675 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:22:03.864682 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:22:03.864689 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:22:03.864697 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:22:03.864704 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:22:03.864711 kernel: ACPI: Interpreter enabled Jan 13 20:22:03.864718 kernel: ACPI: Using GIC for interrupt routing Jan 13 20:22:03.864725 kernel: ACPI: MCFG table detected, 1 entries Jan 13 20:22:03.864732 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 13 20:22:03.864739 kernel: printk: console [ttyAMA0] enabled Jan 13 20:22:03.864746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:22:03.864885 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:22:03.864958 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 20:22:03.865020 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 20:22:03.865079 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 13 20:22:03.865141 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 13 20:22:03.865150 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 13 20:22:03.865157 kernel: PCI host bridge to bus 0000:00 Jan 13 20:22:03.865223 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 13 20:22:03.865330 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 20:22:03.865392 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 13 20:22:03.865474 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:22:03.865552 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 13 20:22:03.865624 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jan 13 20:22:03.865689 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jan 13 20:22:03.865758 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jan 13 20:22:03.865828 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.865891 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jan 13 20:22:03.865960 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.866023 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jan 13 20:22:03.866092 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.866157 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jan 13 20:22:03.866229 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.866308 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jan 13 20:22:03.866377 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.866462 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jan 13 20:22:03.866532 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.866599 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jan 13 20:22:03.866668 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.866731 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jan 13 20:22:03.866799 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.866861 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jan 13 20:22:03.866928 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:03.866989 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jan 13 20:22:03.867061 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jan 13 20:22:03.867123 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207] Jan 13 20:22:03.867194 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 13 20:22:03.867261 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jan 13 20:22:03.867370 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 20:22:03.867495 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 13 20:22:03.867577 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 13 20:22:03.867643 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jan 13 20:22:03.867721 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 13 20:22:03.867787 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jan 13 20:22:03.867852 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jan 13 20:22:03.867936 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 13 20:22:03.868003 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jan 13 20:22:03.868092 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 13 20:22:03.868158 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jan 13 20:22:03.868229 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 13 20:22:03.868308 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jan 13 20:22:03.868374 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jan 13 20:22:03.868505 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 13 20:22:03.868577 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jan 13 20:22:03.868645 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jan 13 20:22:03.868709 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 13 20:22:03.868774 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 13 20:22:03.868835 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 13 20:22:03.868895 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 13 20:22:03.868959 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 13 20:22:03.869020 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 13 20:22:03.869080 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 13 20:22:03.869143 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 13 20:22:03.869203 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 13 20:22:03.869273 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 13 20:22:03.869344 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 13 20:22:03.871445 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 13 20:22:03.871574 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 13 20:22:03.871641 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 13 20:22:03.871703 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 13 20:22:03.871768 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Jan 13 20:22:03.871832 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 13 20:22:03.871895 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 13 20:22:03.871957 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 13 20:22:03.872025 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 13 20:22:03.872087 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 13 20:22:03.872148 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 13 20:22:03.872213 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 13 20:22:03.872295 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 13 20:22:03.872363 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 13 20:22:03.872440 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 13 20:22:03.872504 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 13 20:22:03.872572 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 13 20:22:03.872637 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Jan 13 20:22:03.872699 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 
0x8000000000-0x80001fffff 64bit pref] Jan 13 20:22:03.872762 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jan 13 20:22:03.872824 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:22:03.872887 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jan 13 20:22:03.872949 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:22:03.873015 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jan 13 20:22:03.873077 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:22:03.873139 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jan 13 20:22:03.873201 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:22:03.873295 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jan 13 20:22:03.873372 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:22:03.874585 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jan 13 20:22:03.874660 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:22:03.874724 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jan 13 20:22:03.874786 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:22:03.874848 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jan 13 20:22:03.874909 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:22:03.874974 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jan 13 20:22:03.875042 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jan 13 20:22:03.875104 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jan 13 20:22:03.875166 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 13 20:22:03.875229 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jan 13 20:22:03.875341 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 13 20:22:03.875433 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jan 13 20:22:03.875500 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 13 20:22:03.875562 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jan 13 20:22:03.875627 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 13 20:22:03.875688 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jan 13 20:22:03.875748 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 13 20:22:03.875809 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jan 13 20:22:03.875871 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 13 20:22:03.875933 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jan 13 20:22:03.875996 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 13 20:22:03.876058 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jan 13 20:22:03.876121 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 13 20:22:03.876183 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jan 13 20:22:03.876245 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Jan 13 20:22:03.876333 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jan 
13 20:22:03.878450 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jan 13 20:22:03.878564 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 20:22:03.878632 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jan 13 20:22:03.878695 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 13 20:22:03.878761 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 13 20:22:03.878823 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 13 20:22:03.878884 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 13 20:22:03.878951 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jan 13 20:22:03.879012 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 13 20:22:03.879075 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 13 20:22:03.879135 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 13 20:22:03.879195 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:22:03.879261 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jan 13 20:22:03.879348 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jan 13 20:22:03.880440 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 13 20:22:03.880528 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 13 20:22:03.880596 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 13 20:22:03.880659 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:22:03.881228 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jan 13 20:22:03.881328 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 13 20:22:03.881394 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 13 20:22:03.881508 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 13 20:22:03.881578 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:22:03.881652 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jan 13 20:22:03.881719 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 13 20:22:03.881780 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 13 20:22:03.881840 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 13 20:22:03.881899 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:22:03.881966 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jan 13 20:22:03.882029 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jan 13 20:22:03.882091 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 13 20:22:03.882150 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 13 20:22:03.882213 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 13 20:22:03.882311 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:22:03.882390 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jan 13 20:22:03.883112 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jan 13 20:22:03.883192 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jan 13 20:22:03.883255 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 13 20:22:03.883338 kernel: pci 0000:00:02.6: bridge window 
[io 0x7000-0x7fff] Jan 13 20:22:03.883401 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 13 20:22:03.883495 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:22:03.883558 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 13 20:22:03.883618 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 13 20:22:03.883677 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 13 20:22:03.883759 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:22:03.883824 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 13 20:22:03.883885 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 13 20:22:03.883947 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 13 20:22:03.884011 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:22:03.884073 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 13 20:22:03.884127 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 20:22:03.884192 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 13 20:22:03.884259 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 13 20:22:03.884365 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 13 20:22:03.884508 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 13 20:22:03.884589 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 13 20:22:03.884647 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 13 20:22:03.884702 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:22:03.884766 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 13 20:22:03.884824 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 13 20:22:03.884878 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:22:03.884949 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 13 20:22:03.885005 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 13 20:22:03.885062 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:22:03.885134 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 13 20:22:03.885192 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 13 20:22:03.885248 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:22:03.885332 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 13 20:22:03.885417 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 13 20:22:03.885480 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:22:03.885544 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 13 20:22:03.885602 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 13 20:22:03.885664 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:22:03.885729 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 13 20:22:03.885786 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 13 20:22:03.885844 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:22:03.885907 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 13 20:22:03.885965 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 13 20:22:03.886023 kernel: pci_bus 0000:09: resource 2 
[mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:22:03.886036 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 20:22:03.886044 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 20:22:03.886052 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 20:22:03.886059 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 20:22:03.886067 kernel: iommu: Default domain type: Translated Jan 13 20:22:03.886074 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 20:22:03.886081 kernel: efivars: Registered efivars operations Jan 13 20:22:03.886089 kernel: vgaarb: loaded Jan 13 20:22:03.886096 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 20:22:03.886105 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:22:03.886112 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:22:03.886120 kernel: pnp: PnP ACPI init Jan 13 20:22:03.886189 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 13 20:22:03.886200 kernel: pnp: PnP ACPI: found 1 devices Jan 13 20:22:03.886208 kernel: NET: Registered PF_INET protocol family Jan 13 20:22:03.886215 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:22:03.886223 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:22:03.886233 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:22:03.886240 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:22:03.886248 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:22:03.886255 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:22:03.886273 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:22:03.886283 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:22:03.886291 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:22:03.886367 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 13 20:22:03.886378 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:22:03.886388 kernel: kvm [1]: HYP mode not available Jan 13 20:22:03.886396 kernel: Initialise system trusted keyrings Jan 13 20:22:03.886428 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:22:03.886436 kernel: Key type asymmetric registered Jan 13 20:22:03.886443 kernel: Asymmetric key parser 'x509' registered Jan 13 20:22:03.886451 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 20:22:03.886458 kernel: io scheduler mq-deadline registered Jan 13 20:22:03.886466 kernel: io scheduler kyber registered Jan 13 20:22:03.886473 kernel: io scheduler bfq registered Jan 13 20:22:03.886483 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 20:22:03.886559 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 13 20:22:03.886629 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 13 20:22:03.886694 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:03.886757 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 13 20:22:03.886819 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 13 20:22:03.886884 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Jan 13 20:22:03.886949 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 13 20:22:03.887011 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 13 20:22:03.887073 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:03.887137 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 13 20:22:03.887199 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 13 20:22:03.887296 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:03.887373 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 13 20:22:03.887492 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 13 20:22:03.887559 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:03.887621 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 13 20:22:03.887697 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 13 20:22:03.887766 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:03.887828 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 13 20:22:03.887891 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 13 20:22:03.887953 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:03.888015 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 13 20:22:03.888088 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 13 20:22:03.888154 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:03.888164 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 13 20:22:03.888227 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 13 20:22:03.888305 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 13 20:22:03.888370 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:03.888380 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:22:03.888388 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:22:03.888395 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:22:03.888552 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Jan 13 20:22:03.888623 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 13 20:22:03.888688 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 13 20:22:03.888698 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:22:03.888706 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 20:22:03.888767 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 13 20:22:03.888778 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 13 20:22:03.888785 kernel: thunder_xcv, ver 1.0 Jan 13 20:22:03.888797 kernel: thunder_bgx, ver 1.0 Jan 13 20:22:03.888804 kernel: nicpf, ver 1.0 Jan 13 20:22:03.888812 kernel: nicvf, ver 1.0 Jan 13 20:22:03.888889 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:22:03.888948 
kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:22:03 UTC (1736799723) Jan 13 20:22:03.888958 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:22:03.888965 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 20:22:03.888973 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:22:03.888982 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:22:03.888990 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:22:03.888997 kernel: Segment Routing with IPv6 Jan 13 20:22:03.889005 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:22:03.889012 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:22:03.889019 kernel: Key type dns_resolver registered Jan 13 20:22:03.889026 kernel: registered taskstats version 1 Jan 13 20:22:03.889034 kernel: Loading compiled-in X.509 certificates Jan 13 20:22:03.889041 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 46cb4d1b22f3a5974766fe7d7b651e2f296d4fe0' Jan 13 20:22:03.889050 kernel: Key type .fscrypt registered Jan 13 20:22:03.889057 kernel: Key type fscrypt-provisioning registered Jan 13 20:22:03.889064 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:22:03.889072 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:22:03.889079 kernel: ima: No architecture policies found Jan 13 20:22:03.889086 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:22:03.889094 kernel: clk: Disabling unused clocks Jan 13 20:22:03.889101 kernel: Freeing unused kernel memory: 39936K Jan 13 20:22:03.889108 kernel: Run /init as init process Jan 13 20:22:03.889117 kernel: with arguments: Jan 13 20:22:03.889124 kernel: /init Jan 13 20:22:03.889131 kernel: with environment: Jan 13 20:22:03.889138 kernel: HOME=/ Jan 13 20:22:03.889146 kernel: TERM=linux Jan 13 20:22:03.889153 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:22:03.889162 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:22:03.889172 systemd[1]: Detected virtualization kvm. Jan 13 20:22:03.889181 systemd[1]: Detected architecture arm64. Jan 13 20:22:03.889189 systemd[1]: Running in initrd. Jan 13 20:22:03.889197 systemd[1]: No hostname configured, using default hostname. Jan 13 20:22:03.889204 systemd[1]: Hostname set to . Jan 13 20:22:03.889212 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:22:03.889220 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:22:03.889228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:03.889236 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:03.889246 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:22:03.889254 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:22:03.889262 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:22:03.889307 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 13 20:22:03.889316 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:22:03.889324 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:22:03.889336 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:03.889344 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:03.889352 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:22:03.889361 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:22:03.889369 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:22:03.889377 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:22:03.889385 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:22:03.889393 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:22:03.889401 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:22:03.889454 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:22:03.889462 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:22:03.889470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:03.889478 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:03.889486 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:22:03.889494 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:22:03.889502 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:22:03.889510 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:22:03.889520 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:22:03.889528 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:22:03.889536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:22:03.889544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:03.889577 systemd-journald[238]: Collecting audit messages is disabled. Jan 13 20:22:03.889599 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:22:03.889608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:03.889616 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:22:03.889625 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:22:03.889634 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:22:03.889642 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:03.889650 kernel: Bridge firewalling registered Jan 13 20:22:03.889658 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:03.889667 systemd-journald[238]: Journal started Jan 13 20:22:03.889690 systemd-journald[238]: Runtime Journal (/run/log/journal/3e68d4fa835f4d198106323e89743737) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:22:03.861777 systemd-modules-load[239]: Inserted module 'overlay' Jan 13 20:22:03.891531 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 20:22:03.885337 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 13 20:22:03.896444 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:22:03.898468 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:22:03.899820 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:22:03.910059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:22:03.913059 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:22:03.918425 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:03.923575 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:22:03.926700 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:22:03.931206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:03.938731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:03.947583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:22:03.954523 dracut-cmdline[268]: dracut-dracut-053 Jan 13 20:22:03.960662 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9798117b3b15ef802e3d618077f87253cc08e0d5280b8fe28b307e7558b7ebcc Jan 13 20:22:03.979188 systemd-resolved[275]: Positive Trust Anchors: Jan 13 20:22:03.979208 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:22:03.979238 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:22:03.984754 systemd-resolved[275]: Defaulting to hostname 'linux'. Jan 13 20:22:03.989104 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:22:03.990255 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:04.057508 kernel: SCSI subsystem initialized Jan 13 20:22:04.062477 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:22:04.069456 kernel: iscsi: registered transport (tcp) Jan 13 20:22:04.082430 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:22:04.082489 kernel: QLogic iSCSI HBA Driver Jan 13 20:22:04.129214 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:22:04.135652 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:22:04.161719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 20:22:04.161797 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:22:04.161813 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:22:04.210490 kernel: raid6: neonx8 gen() 15692 MB/s Jan 13 20:22:04.227472 kernel: raid6: neonx4 gen() 15742 MB/s Jan 13 20:22:04.244455 kernel: raid6: neonx2 gen() 11318 MB/s Jan 13 20:22:04.261507 kernel: raid6: neonx1 gen() 9817 MB/s Jan 13 20:22:04.278465 kernel: raid6: int64x8 gen() 6345 MB/s Jan 13 20:22:04.295499 kernel: raid6: int64x4 gen() 5351 MB/s Jan 13 20:22:04.312490 kernel: raid6: int64x2 gen() 5941 MB/s Jan 13 20:22:04.329574 kernel: raid6: int64x1 gen() 5036 MB/s Jan 13 20:22:04.329655 kernel: raid6: using algorithm neonx4 gen() 15742 MB/s Jan 13 20:22:04.346465 kernel: raid6: .... xor() 12371 MB/s, rmw enabled Jan 13 20:22:04.346530 kernel: raid6: using neon recovery algorithm Jan 13 20:22:04.351561 kernel: xor: measuring software checksum speed Jan 13 20:22:04.351610 kernel: 8regs : 21613 MB/sec Jan 13 20:22:04.351628 kernel: 32regs : 21699 MB/sec Jan 13 20:22:04.352439 kernel: arm64_neon : 27860 MB/sec Jan 13 20:22:04.352478 kernel: xor: using function: arm64_neon (27860 MB/sec) Jan 13 20:22:04.400484 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:22:04.413835 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:22:04.424682 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:04.438281 systemd-udevd[457]: Using default interface naming scheme 'v255'. Jan 13 20:22:04.441415 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:04.451589 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:22:04.465766 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jan 13 20:22:04.505048 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:22:04.512582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:22:04.560784 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:04.568752 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:22:04.585015 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:22:04.587719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:22:04.588290 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:04.589964 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:22:04.596778 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:22:04.614474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:22:04.666656 kernel: scsi host0: Virtio SCSI HBA Jan 13 20:22:04.670803 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:22:04.670872 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 13 20:22:04.705225 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:22:04.705358 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:04.707074 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:22:04.707891 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 13 20:22:04.708164 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:04.709065 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:04.718578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:04.721430 kernel: ACPI: bus type USB registered Jan 13 20:22:04.726448 kernel: usbcore: registered new interface driver usbfs Jan 13 20:22:04.726497 kernel: usbcore: registered new interface driver hub Jan 13 20:22:04.726516 kernel: usbcore: registered new device driver usb Jan 13 20:22:04.735425 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 13 20:22:04.738014 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 13 20:22:04.738118 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:22:04.738128 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:22:04.743157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:04.750727 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 13 20:22:04.757856 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 13 20:22:04.757957 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 13 20:22:04.758047 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 13 20:22:04.758141 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:22:04.758240 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:22:04.758251 kernel: GPT:17805311 != 80003071 Jan 13 20:22:04.758270 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:22:04.758281 kernel: GPT:17805311 != 80003071 Jan 13 20:22:04.758294 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:22:04.758304 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:22:04.758313 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 13 20:22:04.754916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:22:04.768198 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:22:04.774560 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 13 20:22:04.774666 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 20:22:04.774744 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:22:04.774821 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 13 20:22:04.774906 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 13 20:22:04.774982 kernel: hub 1-0:1.0: USB hub found Jan 13 20:22:04.775073 kernel: hub 1-0:1.0: 4 ports detected Jan 13 20:22:04.775147 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 20:22:04.775235 kernel: hub 2-0:1.0: USB hub found Jan 13 20:22:04.775339 kernel: hub 2-0:1.0: 4 ports detected Jan 13 20:22:04.779949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:04.814438 kernel: BTRFS: device fsid 2be7cc1c-29d4-4496-b29b-8561323213d2 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (519) Jan 13 20:22:04.818433 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (527) Jan 13 20:22:04.822593 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Jan 13 20:22:04.833123 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 13 20:22:04.839017 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 13 20:22:04.841024 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 13 20:22:04.846923 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:22:04.854575 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:22:04.859423 disk-uuid[577]: Primary Header is updated. Jan 13 20:22:04.859423 disk-uuid[577]: Secondary Entries is updated. Jan 13 20:22:04.859423 disk-uuid[577]: Secondary Header is updated. Jan 13 20:22:04.865458 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:22:05.012472 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 20:22:05.254572 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 13 20:22:05.391496 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 13 20:22:05.392482 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 13 20:22:05.393463 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 13 20:22:05.447564 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 13 20:22:05.447945 kernel: usbcore: registered new interface driver usbhid Jan 13 20:22:05.448511 kernel: usbhid: USB HID core driver Jan 13 20:22:05.875466 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:22:05.876455 disk-uuid[578]: The operation has completed successfully. Jan 13 20:22:05.925647 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:22:05.927520 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:22:05.941674 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:22:05.946953 sh[593]: Success Jan 13 20:22:05.958454 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:22:06.016921 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:22:06.026336 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:22:06.029032 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:22:06.045812 kernel: BTRFS info (device dm-0): first mount of filesystem 2be7cc1c-29d4-4496-b29b-8561323213d2 Jan 13 20:22:06.045882 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:06.046727 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:22:06.046770 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:22:06.047449 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:22:06.053437 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:22:06.055341 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:22:06.055964 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 13 20:22:06.064766 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:22:06.069950 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:22:06.083540 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:22:06.083615 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:06.083641 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:22:06.089944 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:22:06.090009 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:22:06.098659 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:22:06.100443 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:22:06.105148 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:22:06.111083 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:22:06.216315 ignition[676]: Ignition 2.20.0 Jan 13 20:22:06.216327 ignition[676]: Stage: fetch-offline Jan 13 20:22:06.216362 ignition[676]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:06.216371 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:06.218782 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:22:06.216579 ignition[676]: parsed url from cmdline: "" Jan 13 20:22:06.219650 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:22:06.216583 ignition[676]: no config URL provided Jan 13 20:22:06.216588 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:22:06.216596 ignition[676]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:22:06.216602 ignition[676]: failed to fetch config: resource requires networking Jan 13 20:22:06.217936 ignition[676]: Ignition finished successfully Jan 13 20:22:06.225648 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:22:06.247592 systemd-networkd[781]: lo: Link UP Jan 13 20:22:06.247601 systemd-networkd[781]: lo: Gained carrier Jan 13 20:22:06.249119 systemd-networkd[781]: Enumeration completed Jan 13 20:22:06.249808 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:06.249811 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:06.250116 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:22:06.251627 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:06.251630 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:06.251706 systemd[1]: Reached target network.target - Network. Jan 13 20:22:06.252102 systemd-networkd[781]: eth0: Link UP Jan 13 20:22:06.252106 systemd-networkd[781]: eth0: Gained carrier Jan 13 20:22:06.252112 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:22:06.258812 systemd-networkd[781]: eth1: Link UP Jan 13 20:22:06.258815 systemd-networkd[781]: eth1: Gained carrier Jan 13 20:22:06.258822 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:06.259129 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:22:06.272217 ignition[783]: Ignition 2.20.0 Jan 13 20:22:06.272227 ignition[783]: Stage: fetch Jan 13 20:22:06.272461 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:06.272471 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:06.272561 ignition[783]: parsed url from cmdline: "" Jan 13 20:22:06.272564 ignition[783]: no config URL provided Jan 13 20:22:06.272569 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:22:06.272575 ignition[783]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:22:06.272657 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 13 20:22:06.273348 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 13 20:22:06.295503 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:22:06.315509 systemd-networkd[781]: eth0: DHCPv4 address 138.199.153.211/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:22:06.473456 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 13 20:22:06.480720 ignition[783]: GET result: OK Jan 13 20:22:06.480798 ignition[783]: parsing config with SHA512: 11ded17009038e2dca9799e62565d6c135450e35bda59d2f6636053fe7778437195484d0ce6f093ffed3d9f8f5f5eee8e0936341e240c6a579918954c818efc7 Jan 13 20:22:06.486520 unknown[783]: fetched base config from "system" Jan 13 20:22:06.487246 unknown[783]: fetched base config from "system" Jan 13 20:22:06.487270 unknown[783]: fetched user config from "hetzner" Jan 13 20:22:06.488826 ignition[783]: fetch: fetch complete Jan 13 20:22:06.488831 ignition[783]: fetch: fetch passed Jan 13 20:22:06.488895 ignition[783]: Ignition finished successfully Jan 13 20:22:06.493445 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:22:06.498655 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:22:06.511291 ignition[790]: Ignition 2.20.0 Jan 13 20:22:06.511301 ignition[790]: Stage: kargs Jan 13 20:22:06.511495 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:06.511514 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:06.512205 ignition[790]: kargs: kargs passed Jan 13 20:22:06.512248 ignition[790]: Ignition finished successfully Jan 13 20:22:06.515283 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:22:06.520567 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:22:06.533777 ignition[797]: Ignition 2.20.0 Jan 13 20:22:06.533803 ignition[797]: Stage: disks Jan 13 20:22:06.534588 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:06.534601 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:06.536907 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:22:06.535305 ignition[797]: disks: disks passed Jan 13 20:22:06.539675 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 13 20:22:06.535345 ignition[797]: Ignition finished successfully Jan 13 20:22:06.540599 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:22:06.541574 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:22:06.542723 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:22:06.543619 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:22:06.555801 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:22:06.574151 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:22:06.578498 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:22:06.587569 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:22:06.643424 kernel: EXT4-fs (sda9): mounted filesystem f9a95e53-2d63-4443-b523-cb2108fb48f6 r/w with ordered data mode. Quota mode: none. Jan 13 20:22:06.644489 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:22:06.646089 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:22:06.653566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:22:06.656766 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:22:06.659594 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 20:22:06.666563 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:22:06.668378 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:22:06.670579 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (813) Jan 13 20:22:06.670621 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:22:06.671985 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:06.672052 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:22:06.675350 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:22:06.683637 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:22:06.683692 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:22:06.682635 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:22:06.687436 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:22:06.736293 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:22:06.737273 coreos-metadata[815]: Jan 13 20:22:06.736 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 13 20:22:06.738355 coreos-metadata[815]: Jan 13 20:22:06.737 INFO Fetch successful Jan 13 20:22:06.738355 coreos-metadata[815]: Jan 13 20:22:06.737 INFO wrote hostname ci-4186-1-0-d-95a4635712 to /sysroot/etc/hostname Jan 13 20:22:06.740243 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 20:22:06.746340 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:22:06.751213 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:22:06.756008 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:22:06.849953 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Jan 13 20:22:06.854596 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:22:06.858665 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:22:06.865500 kernel: BTRFS info (device sda6): last unmount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:22:06.884890 ignition[931]: INFO : Ignition 2.20.0 Jan 13 20:22:06.884890 ignition[931]: INFO : Stage: mount Jan 13 20:22:06.885850 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:06.885850 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:06.887720 ignition[931]: INFO : mount: mount passed Jan 13 20:22:06.887720 ignition[931]: INFO : Ignition finished successfully Jan 13 20:22:06.887098 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:22:06.888373 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:22:06.896614 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:22:07.045615 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:22:07.052003 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:22:07.065473 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943) Jan 13 20:22:07.067705 kernel: BTRFS info (device sda6): first mount of filesystem 9f8ecb6c-ace6-4d16-8781-f4e964dc0779 Jan 13 20:22:07.067754 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:07.067776 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:22:07.071590 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:22:07.071641 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:22:07.074780 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:22:07.094353 ignition[960]: INFO : Ignition 2.20.0 Jan 13 20:22:07.094353 ignition[960]: INFO : Stage: files Jan 13 20:22:07.095332 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:07.095332 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:07.095332 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:22:07.097868 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:22:07.097868 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:22:07.100935 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:22:07.101881 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:22:07.101881 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:22:07.101773 unknown[960]: wrote ssh authorized keys file for user: core Jan 13 20:22:07.104939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:22:07.104939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:22:07.104939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:22:07.104939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:22:07.104939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:22:07.104939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:22:07.104939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:22:07.104939 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 13 20:22:07.439770 systemd-networkd[781]: eth0: Gained IPv6LL Jan 13 20:22:07.692150 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 20:22:08.079635 systemd-networkd[781]: eth1: Gained IPv6LL Jan 13 20:22:08.140175 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 13 20:22:08.140175 ignition[960]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 13 20:22:08.143332 ignition[960]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:22:08.143332 ignition[960]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:22:08.143332 ignition[960]: INFO : files: op(7): [finished] 
processing unit "coreos-metadata.service" Jan 13 20:22:08.143332 ignition[960]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:22:08.143332 ignition[960]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:22:08.143332 ignition[960]: INFO : files: files passed Jan 13 20:22:08.143332 ignition[960]: INFO : Ignition finished successfully Jan 13 20:22:08.144741 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:22:08.151647 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:22:08.155604 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:22:08.159630 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:22:08.159730 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:22:08.182828 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:08.182828 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:08.185227 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:08.187064 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:22:08.187936 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:22:08.195688 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:22:08.219168 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:22:08.220105 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:22:08.222337 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:22:08.223691 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:22:08.225617 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:22:08.235582 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:22:08.256182 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:22:08.264751 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:22:08.277523 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:08.278890 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:08.279612 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:22:08.280560 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:22:08.280685 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:22:08.282071 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:22:08.282800 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:22:08.283939 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:22:08.285170 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:22:08.286224 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 13 20:22:08.287425 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:22:08.288495 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:22:08.289582 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:22:08.290650 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:22:08.291669 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:22:08.292519 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:22:08.292644 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:22:08.293800 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:08.294439 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:08.295375 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:22:08.295808 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:08.296488 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:22:08.296626 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:22:08.298042 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:22:08.298157 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:22:08.299208 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:22:08.299365 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:22:08.300232 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 20:22:08.300340 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 20:22:08.311802 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:22:08.313095 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:22:08.313465 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:08.318228 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:22:08.318855 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:22:08.319001 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:08.323472 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:22:08.323587 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:22:08.334797 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:22:08.334939 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:22:08.341249 ignition[1013]: INFO : Ignition 2.20.0 Jan 13 20:22:08.341249 ignition[1013]: INFO : Stage: umount Jan 13 20:22:08.341249 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:08.341249 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:08.352442 ignition[1013]: INFO : umount: umount passed Jan 13 20:22:08.352442 ignition[1013]: INFO : Ignition finished successfully Jan 13 20:22:08.346076 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:22:08.349088 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:22:08.351016 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:22:08.354395 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 13 20:22:08.355104 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:22:08.356234 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:22:08.356294 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:22:08.357940 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:22:08.357979 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:22:08.358582 systemd[1]: Stopped target network.target - Network. Jan 13 20:22:08.359028 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:22:08.359072 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:22:08.362055 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:22:08.362731 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:22:08.366664 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:08.367278 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:22:08.367803 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:22:08.371329 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:22:08.371382 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:22:08.372082 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:22:08.372117 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:22:08.374877 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:22:08.374938 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:22:08.375742 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:22:08.375780 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:22:08.378570 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:22:08.379642 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:22:08.384583 systemd-networkd[781]: eth0: DHCPv6 lease lost Jan 13 20:22:08.386969 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:22:08.387065 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:22:08.388466 systemd-networkd[781]: eth1: DHCPv6 lease lost Jan 13 20:22:08.389779 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:22:08.391041 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:22:08.392431 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:22:08.392565 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:22:08.395523 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:22:08.395574 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:22:08.396122 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:22:08.396164 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:22:08.402631 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:22:08.403066 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:22:08.403122 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:22:08.404664 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:22:08.404705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 13 20:22:08.406017 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:22:08.406057 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:08.406620 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:22:08.406653 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:08.407357 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:08.418209 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:22:08.418376 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:22:08.428725 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:22:08.430498 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:08.434217 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:22:08.434570 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:08.436616 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:22:08.436793 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:08.438380 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:22:08.438487 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:22:08.440926 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:22:08.440976 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:22:08.442455 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:22:08.442505 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:08.453616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:22:08.454168 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:22:08.454229 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:08.457353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:22:08.457455 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:08.463693 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:22:08.463821 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:22:08.465204 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:22:08.470716 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:22:08.478215 systemd[1]: Switching root. Jan 13 20:22:08.507970 systemd-journald[238]: Journal stopped Jan 13 20:22:09.372358 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:22:09.377168 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:22:09.377200 kernel: SELinux: policy capability open_perms=1 Jan 13 20:22:09.377210 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:22:09.377219 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:22:09.377228 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:22:09.377248 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:22:09.377266 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:22:09.377278 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:22:09.377291 kernel: audit: type=1403 audit(1736799728.640:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:22:09.377302 systemd[1]: Successfully loaded SELinux policy in 35.921ms. Jan 13 20:22:09.377321 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.353ms. Jan 13 20:22:09.377332 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:22:09.377342 systemd[1]: Detected virtualization kvm. Jan 13 20:22:09.377352 systemd[1]: Detected architecture arm64. Jan 13 20:22:09.377364 systemd[1]: Detected first boot. Jan 13 20:22:09.377374 systemd[1]: Hostname set to . Jan 13 20:22:09.378500 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:22:09.378526 zram_generator::config[1056]: No configuration found. Jan 13 20:22:09.378546 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:22:09.378557 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:22:09.378567 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:22:09.378583 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:22:09.378594 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:22:09.378607 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:22:09.378617 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:22:09.378628 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:22:09.378638 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:22:09.378649 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:22:09.378662 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:22:09.378672 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:22:09.378685 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:09.378697 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:09.378707 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:22:09.378717 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:22:09.378728 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 13 20:22:09.378738 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:22:09.378748 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:22:09.378758 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:09.378768 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:22:09.378780 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:22:09.378790 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:22:09.378803 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:22:09.378813 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:09.378823 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:22:09.378833 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:22:09.378844 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:22:09.378857 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:22:09.378868 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:22:09.378881 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:22:09.378895 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:09.378905 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:09.378915 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:22:09.378925 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:22:09.378934 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:22:09.378945 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:22:09.378955 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:22:09.378966 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:22:09.378976 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:22:09.378987 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:22:09.378997 systemd[1]: Reached target machines.target - Containers. Jan 13 20:22:09.379007 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:22:09.379017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:09.379027 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:22:09.379037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:22:09.379049 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:09.379059 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:22:09.379069 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:09.379078 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:22:09.379088 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 13 20:22:09.379099 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:22:09.379110 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:22:09.379120 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:22:09.379138 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:22:09.379152 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:22:09.379162 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:22:09.379172 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:22:09.379182 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:22:09.379193 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:22:09.379203 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:22:09.379213 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:22:09.379223 systemd[1]: Stopped verity-setup.service. Jan 13 20:22:09.379379 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:22:09.379398 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:22:09.380464 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:22:09.380485 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:22:09.380497 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:22:09.380510 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:22:09.380522 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:09.380533 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:22:09.380551 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:22:09.380566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:09.380580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:09.380595 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:09.380608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:09.380620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:09.380633 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:22:09.380645 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:22:09.380659 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:22:09.380671 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:22:09.380684 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:22:09.380695 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:22:09.380707 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Jan 13 20:22:09.380718 kernel: fuse: init (API version 7.39) Jan 13 20:22:09.380729 kernel: ACPI: bus type drm_connector registered Jan 13 20:22:09.380745 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:22:09.380785 systemd-journald[1123]: Collecting audit messages is disabled. Jan 13 20:22:09.380812 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:22:09.380824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:09.380835 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:22:09.380850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:22:09.380861 kernel: loop: module loaded Jan 13 20:22:09.380873 systemd-journald[1123]: Journal started Jan 13 20:22:09.380903 systemd-journald[1123]: Runtime Journal (/run/log/journal/3e68d4fa835f4d198106323e89743737) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:22:09.386467 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:22:09.104492 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:22:09.122428 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 20:22:09.123046 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:22:09.388459 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:22:09.396628 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:22:09.398707 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:22:09.401474 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:22:09.402370 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:22:09.402551 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:22:09.405026 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:22:09.405493 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:22:09.406766 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:09.407317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:22:09.408798 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:22:09.410530 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:22:09.412071 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:22:09.449088 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:22:09.460445 kernel: loop0: detected capacity change from 0 to 116784 Jan 13 20:22:09.461530 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:22:09.465584 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:22:09.471647 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:22:09.472281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:22:09.475937 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 13 20:22:09.479112 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:22:09.490671 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:22:09.498911 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:09.501435 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:22:09.506281 systemd-journald[1123]: Time spent on flushing to /var/log/journal/3e68d4fa835f4d198106323e89743737 is 32.177ms for 1114 entries. Jan 13 20:22:09.506281 systemd-journald[1123]: System Journal (/var/log/journal/3e68d4fa835f4d198106323e89743737) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:22:09.549856 systemd-journald[1123]: Received client request to flush runtime journal. Jan 13 20:22:09.549901 kernel: loop1: detected capacity change from 0 to 189592 Jan 13 20:22:09.509336 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:22:09.532212 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:22:09.533686 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:22:09.551929 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:22:09.564462 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:22:09.566335 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:22:09.575074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:22:09.585435 kernel: loop2: detected capacity change from 0 to 113552 Jan 13 20:22:09.623167 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 13 20:22:09.623530 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 13 20:22:09.630696 kernel: loop3: detected capacity change from 0 to 8 Jan 13 20:22:09.636502 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:09.656498 kernel: loop4: detected capacity change from 0 to 116784 Jan 13 20:22:09.671483 kernel: loop5: detected capacity change from 0 to 189592 Jan 13 20:22:09.701446 kernel: loop6: detected capacity change from 0 to 113552 Jan 13 20:22:09.730734 kernel: loop7: detected capacity change from 0 to 8 Jan 13 20:22:09.732028 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 13 20:22:09.733345 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 13 20:22:09.740076 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:22:09.740430 systemd[1]: Reloading... Jan 13 20:22:09.852344 zram_generator::config[1223]: No configuration found. Jan 13 20:22:09.922618 ldconfig[1147]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:22:09.982851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:22:10.028319 systemd[1]: Reloading finished in 287 ms. Jan 13 20:22:10.055678 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:22:10.057837 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 13 20:22:10.067662 systemd[1]: Starting ensure-sysext.service... Jan 13 20:22:10.071615 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:22:10.087949 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:22:10.099766 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:10.102277 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:22:10.102295 systemd[1]: Reloading... Jan 13 20:22:10.105288 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:22:10.105829 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:22:10.106513 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:22:10.106716 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 20:22:10.106761 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 13 20:22:10.113209 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:22:10.114454 systemd-tmpfiles[1261]: Skipping /boot Jan 13 20:22:10.132047 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:22:10.132169 systemd-tmpfiles[1261]: Skipping /boot Jan 13 20:22:10.150081 systemd-udevd[1264]: Using default interface naming scheme 'v255'. Jan 13 20:22:10.171528 zram_generator::config[1289]: No configuration found. Jan 13 20:22:10.347471 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:22:10.386530 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:22:10.386586 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1310) Jan 13 20:22:10.418286 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:22:10.418589 systemd[1]: Reloading finished in 316 ms. Jan 13 20:22:10.435021 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:10.440806 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:10.479722 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:22:10.483666 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:22:10.484564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:10.486666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:10.490647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:10.495224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:22:10.496129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:10.497945 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:22:10.504057 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 20:22:10.513820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:22:10.518437 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:22:10.520166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:10.521318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:10.523949 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 13 20:22:10.543219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:10.544479 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:10.556674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:10.561676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:10.565072 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:22:10.568885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:10.570631 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:10.574635 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:22:10.576116 systemd[1]: Finished ensure-sysext.service. Jan 13 20:22:10.577916 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:10.578056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:22:10.583370 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:22:10.586598 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:22:10.597446 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 13 20:22:10.598456 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:22:10.598496 kernel: [drm] features: -context_init Jan 13 20:22:10.603436 kernel: [drm] number of scanouts: 1 Jan 13 20:22:10.603493 kernel: [drm] number of cap sets: 0 Jan 13 20:22:10.604734 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:22:10.605504 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 13 20:22:10.606900 augenrules[1406]: No rules Jan 13 20:22:10.613425 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:22:10.633527 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:22:10.633627 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:22:10.635193 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:22:10.636485 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:22:10.637941 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:22:10.639988 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:22:10.641008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:10.641212 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:10.642173 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 13 20:22:10.642465 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:22:10.643340 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:10.643683 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:10.644772 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:22:10.656299 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:22:10.656827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:22:10.665689 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:22:10.668665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:10.670305 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:22:10.671156 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:22:10.692619 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:22:10.754932 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:22:10.759511 systemd-resolved[1380]: Positive Trust Anchors: Jan 13 20:22:10.759533 systemd-resolved[1380]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:22:10.759566 systemd-resolved[1380]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:22:10.765770 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:22:10.767488 systemd-networkd[1377]: lo: Link UP Jan 13 20:22:10.767507 systemd-networkd[1377]: lo: Gained carrier Jan 13 20:22:10.768955 systemd-resolved[1380]: Using system hostname 'ci-4186-1-0-d-95a4635712'. Jan 13 20:22:10.769777 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:22:10.770456 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:22:10.771004 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:10.771044 systemd-networkd[1377]: Enumeration completed Jan 13 20:22:10.771614 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:10.771617 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:10.776106 systemd-networkd[1377]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:22:10.776121 systemd-networkd[1377]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:10.776644 systemd-networkd[1377]: eth0: Link UP Jan 13 20:22:10.776648 systemd-networkd[1377]: eth0: Gained carrier Jan 13 20:22:10.776660 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:10.777843 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:22:10.779569 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:22:10.780138 systemd[1]: Reached target network.target - Network. Jan 13 20:22:10.781174 systemd-networkd[1377]: eth1: Link UP Jan 13 20:22:10.781183 systemd-networkd[1377]: eth1: Gained carrier Jan 13 20:22:10.781197 systemd-networkd[1377]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:10.782108 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:22:10.788660 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:22:10.790882 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:10.820068 systemd-networkd[1377]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:22:10.821277 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:22:10.821742 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Jan 13 20:22:10.824084 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:10.824862 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:22:10.825494 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:22:10.826105 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:22:10.826924 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:22:10.827603 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:22:10.828274 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:22:10.828962 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:22:10.828998 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:22:10.829461 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:22:10.830916 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:22:10.832501 systemd-networkd[1377]: eth0: DHCPv4 address 138.199.153.211/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:22:10.833056 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:22:10.833536 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Jan 13 20:22:10.835031 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Jan 13 20:22:10.844661 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:22:10.847011 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
Jan 13 20:22:10.848534 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:22:10.849119 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:22:10.849653 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:22:10.850134 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:22:10.850161 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:22:10.853560 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:22:10.858489 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:22:10.858762 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:22:10.862619 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:22:10.867707 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:22:10.873193 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:22:10.873766 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:22:10.881804 jq[1448]: false Jan 13 20:22:10.882617 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:22:10.886040 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 13 20:22:10.889671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:22:10.891207 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:22:10.906432 coreos-metadata[1446]: Jan 13 20:22:10.906 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 13 20:22:10.912594 coreos-metadata[1446]: Jan 13 20:22:10.908 INFO Fetch successful Jan 13 20:22:10.912594 coreos-metadata[1446]: Jan 13 20:22:10.908 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 13 20:22:10.912594 coreos-metadata[1446]: Jan 13 20:22:10.910 INFO Fetch successful Jan 13 20:22:10.911608 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:22:10.913428 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:22:10.913894 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:22:10.917583 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:22:10.919575 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:22:10.922786 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:22:10.928731 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:22:10.929669 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:22:10.929967 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:22:10.930125 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 13 20:22:10.959025 dbus-daemon[1447]: [system] SELinux support is enabled Jan 13 20:22:10.967586 extend-filesystems[1451]: Found loop4 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found loop5 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found loop6 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found loop7 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found sda Jan 13 20:22:10.967586 extend-filesystems[1451]: Found sda1 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found sda2 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found sda3 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found usr Jan 13 20:22:10.967586 extend-filesystems[1451]: Found sda4 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found sda6 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found sda7 Jan 13 20:22:10.967586 extend-filesystems[1451]: Found sda9 Jan 13 20:22:10.967586 extend-filesystems[1451]: Checking size of /dev/sda9 Jan 13 20:22:10.962598 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:22:10.995599 update_engine[1464]: I20250113 20:22:10.985693 1464 main.cc:92] Flatcar Update Engine starting Jan 13 20:22:10.995787 extend-filesystems[1451]: Resized partition /dev/sda9 Jan 13 20:22:10.962779 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:22:10.997654 jq[1465]: true Jan 13 20:22:10.997826 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:22:11.008714 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 13 20:22:10.963895 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:22:11.008962 update_engine[1464]: I20250113 20:22:10.999626 1464 update_check_scheduler.cc:74] Next update check in 9m10s Jan 13 20:22:10.967834 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:22:10.967863 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:22:10.969610 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:22:10.969628 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:22:11.009380 jq[1481]: true Jan 13 20:22:10.994085 (ntainerd)[1480]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:22:10.999460 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:22:11.028485 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:22:11.091861 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:22:11.093381 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:22:11.114400 systemd-logind[1458]: New seat seat0. Jan 13 20:22:11.147073 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:22:11.160236 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 13 20:22:11.160443 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 13 20:22:11.177704 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:22:11.177841 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 13 20:22:11.178296 extend-filesystems[1486]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 20:22:11.178296 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 13 20:22:11.178296 extend-filesystems[1486]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 13 20:22:11.182515 extend-filesystems[1451]: Resized filesystem in /dev/sda9 Jan 13 20:22:11.182515 extend-filesystems[1451]: Found sr0 Jan 13 20:22:11.197757 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1299) Jan 13 20:22:11.194896 systemd[1]: Starting sshkeys.service... Jan 13 20:22:11.195882 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:22:11.197778 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:22:11.197966 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:22:11.229724 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:22:11.233728 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 20:22:11.289112 coreos-metadata[1525]: Jan 13 20:22:11.288 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 13 20:22:11.290346 coreos-metadata[1525]: Jan 13 20:22:11.290 INFO Fetch successful Jan 13 20:22:11.295709 unknown[1525]: wrote ssh authorized keys file for user: core Jan 13 20:22:11.311329 containerd[1480]: time="2025-01-13T20:22:11.311203560Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:22:11.319941 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:22:11.327402 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:22:11.328883 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:22:11.332461 systemd[1]: Finished sshkeys.service. Jan 13 20:22:11.344890 containerd[1480]: time="2025-01-13T20:22:11.344829920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.346450 containerd[1480]: time="2025-01-13T20:22:11.346417240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.346532 containerd[1480]: time="2025-01-13T20:22:11.346518880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:22:11.346586 containerd[1480]: time="2025-01-13T20:22:11.346574320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:22:11.346789 containerd[1480]: time="2025-01-13T20:22:11.346769040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 13 20:22:11.346874 containerd[1480]: time="2025-01-13T20:22:11.346859640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.347001 containerd[1480]: time="2025-01-13T20:22:11.346982520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.347054 containerd[1480]: time="2025-01-13T20:22:11.347041720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.347298 containerd[1480]: time="2025-01-13T20:22:11.347273520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.347362 containerd[1480]: time="2025-01-13T20:22:11.347349760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.347441 containerd[1480]: time="2025-01-13T20:22:11.347400720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.347507 containerd[1480]: time="2025-01-13T20:22:11.347493680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.347634 containerd[1480]: time="2025-01-13T20:22:11.347616840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.347883 containerd[1480]: time="2025-01-13T20:22:11.347863040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:11.348053 containerd[1480]: time="2025-01-13T20:22:11.348034600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:11.348108 containerd[1480]: time="2025-01-13T20:22:11.348096440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:22:11.348285 containerd[1480]: time="2025-01-13T20:22:11.348217960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:22:11.348399 containerd[1480]: time="2025-01-13T20:22:11.348382720Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:22:11.353398 containerd[1480]: time="2025-01-13T20:22:11.353365880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:22:11.353559 containerd[1480]: time="2025-01-13T20:22:11.353538960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:22:11.353681 containerd[1480]: time="2025-01-13T20:22:11.353664240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:22:11.353783 containerd[1480]: time="2025-01-13T20:22:11.353764640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 13 20:22:11.353860 containerd[1480]: time="2025-01-13T20:22:11.353844240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:22:11.354079 containerd[1480]: time="2025-01-13T20:22:11.354057000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:22:11.354581 containerd[1480]: time="2025-01-13T20:22:11.354548240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:22:11.354814 containerd[1480]: time="2025-01-13T20:22:11.354790520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:22:11.354901 containerd[1480]: time="2025-01-13T20:22:11.354884080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:22:11.354980 containerd[1480]: time="2025-01-13T20:22:11.354963040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:22:11.355054 containerd[1480]: time="2025-01-13T20:22:11.355037360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.355144 containerd[1480]: time="2025-01-13T20:22:11.355127280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.355219 containerd[1480]: time="2025-01-13T20:22:11.355200840Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.355367 containerd[1480]: time="2025-01-13T20:22:11.355345520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.355479 containerd[1480]: time="2025-01-13T20:22:11.355460680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.355555 containerd[1480]: time="2025-01-13T20:22:11.355538880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.355623 containerd[1480]: time="2025-01-13T20:22:11.355607040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.355680 containerd[1480]: time="2025-01-13T20:22:11.355669240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:22:11.355763 containerd[1480]: time="2025-01-13T20:22:11.355748760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.355839 containerd[1480]: time="2025-01-13T20:22:11.355825960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.355947 containerd[1480]: time="2025-01-13T20:22:11.355933120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356002 containerd[1480]: time="2025-01-13T20:22:11.355990920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356051 containerd[1480]: time="2025-01-13T20:22:11.356039600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 13 20:22:11.356110 containerd[1480]: time="2025-01-13T20:22:11.356098880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356162 containerd[1480]: time="2025-01-13T20:22:11.356149680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356213 containerd[1480]: time="2025-01-13T20:22:11.356200120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356278 containerd[1480]: time="2025-01-13T20:22:11.356266480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356333 containerd[1480]: time="2025-01-13T20:22:11.356321960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356383 containerd[1480]: time="2025-01-13T20:22:11.356372360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356467 containerd[1480]: time="2025-01-13T20:22:11.356454160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356520 containerd[1480]: time="2025-01-13T20:22:11.356508720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356573 containerd[1480]: time="2025-01-13T20:22:11.356562080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:22:11.356634 containerd[1480]: time="2025-01-13T20:22:11.356621720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356686 containerd[1480]: time="2025-01-13T20:22:11.356675200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.356735 containerd[1480]: time="2025-01-13T20:22:11.356724760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:22:11.356956 containerd[1480]: time="2025-01-13T20:22:11.356942520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:22:11.357091 containerd[1480]: time="2025-01-13T20:22:11.357074080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:22:11.357147 containerd[1480]: time="2025-01-13T20:22:11.357134800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:22:11.357196 containerd[1480]: time="2025-01-13T20:22:11.357183440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:22:11.357251 containerd[1480]: time="2025-01-13T20:22:11.357239720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.357302 containerd[1480]: time="2025-01-13T20:22:11.357291280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:22:11.357347 containerd[1480]: time="2025-01-13T20:22:11.357336960Z" level=info msg="NRI interface is disabled by configuration." 
Jan 13 20:22:11.357418 containerd[1480]: time="2025-01-13T20:22:11.357392960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:22:11.357837 containerd[1480]: time="2025-01-13T20:22:11.357790880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:22:11.357991 containerd[1480]: time="2025-01-13T20:22:11.357977040Z" level=info msg="Connect containerd service" Jan 13 20:22:11.358070 containerd[1480]: time="2025-01-13T20:22:11.358058040Z" level=info msg="using legacy CRI server" Jan 13 20:22:11.358115 containerd[1480]: time="2025-01-13T20:22:11.358104080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:22:11.358466 containerd[1480]: time="2025-01-13T20:22:11.358445080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:22:11.359162 containerd[1480]: time="2025-01-13T20:22:11.359137960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:22:11.359439 containerd[1480]: time="2025-01-13T20:22:11.359393960Z" level=info msg="Start subscribing containerd event" Jan 13 20:22:11.359517 containerd[1480]: time="2025-01-13T20:22:11.359505480Z" level=info msg="Start recovering state" Jan 13 20:22:11.359614 containerd[1480]: time="2025-01-13T20:22:11.359601840Z" level=info msg="Start event monitor" Jan 13 20:22:11.359665 containerd[1480]: time="2025-01-13T20:22:11.359653920Z" level=info msg="Start snapshots syncer" Jan 13 20:22:11.359708 containerd[1480]: time="2025-01-13T20:22:11.359698120Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:22:11.360242 containerd[1480]: time="2025-01-13T20:22:11.359759520Z" level=info msg="Start streaming server" Jan 13 20:22:11.360516 containerd[1480]: time="2025-01-13T20:22:11.360497000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:22:11.360613 containerd[1480]: time="2025-01-13T20:22:11.360601680Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:22:11.360776 containerd[1480]: time="2025-01-13T20:22:11.360763320Z" level=info msg="containerd successfully booted in 0.052539s" Jan 13 20:22:11.360854 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:22:11.575177 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:22:11.597107 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:22:11.605833 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:22:11.612728 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:22:11.612960 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:22:11.620756 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:22:11.634844 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:22:11.642872 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:22:11.646917 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:22:11.649511 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:22:11.983737 systemd-networkd[1377]: eth1: Gained IPv6LL Jan 13 20:22:11.984829 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Jan 13 20:22:11.987869 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:22:11.989599 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:22:12.006774 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:12.009432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:22:12.032991 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:22:12.431736 systemd-networkd[1377]: eth0: Gained IPv6LL Jan 13 20:22:12.432488 systemd-timesyncd[1412]: Network configuration changed, trying to establish connection. Jan 13 20:22:12.676519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:12.677974 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:22:12.679712 systemd[1]: Startup finished in 720ms (kernel) + 4.957s (initrd) + 4.075s (userspace) = 9.752s. 
Jan 13 20:22:12.686000 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:12.697102 agetty[1552]: failed to open credentials directory Jan 13 20:22:12.697342 agetty[1553]: failed to open credentials directory Jan 13 20:22:13.181358 kubelet[1570]: E0113 20:22:13.181209 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:13.184296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:13.184604 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:23.435181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:22:23.444793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:23.548457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:23.552998 (kubelet)[1589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:23.596644 kubelet[1589]: E0113 20:22:23.596579 1589 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:23.601562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:23.601712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:33.852479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:22:33.861948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:33.951010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:33.955771 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:33.999743 kubelet[1605]: E0113 20:22:33.999628 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:34.002080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:34.002232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:42.868316 systemd-timesyncd[1412]: Contacted time server 167.235.139.237:123 (2.flatcar.pool.ntp.org). Jan 13 20:22:42.868456 systemd-timesyncd[1412]: Initial clock synchronization to Mon 2025-01-13 20:22:42.964929 UTC. Jan 13 20:22:44.253692 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:22:44.259664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:44.412682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:22:44.413362 (kubelet)[1620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:44.451530 kubelet[1620]: E0113 20:22:44.451471 1620 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:44.453793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:44.453968 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:54.708578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 20:22:54.718786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:54.851189 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:54.852089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:54.897810 kubelet[1635]: E0113 20:22:54.895875 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:54.898601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:54.898755 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:55.946265 update_engine[1464]: I20250113 20:22:55.946123 1464 update_attempter.cc:509] Updating boot flags... Jan 13 20:22:55.996552 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1651) Jan 13 20:22:56.046816 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1654) Jan 13 20:22:56.124449 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1654) Jan 13 20:23:04.903655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 13 20:23:04.910883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:05.015900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:05.020700 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:05.066944 kubelet[1671]: E0113 20:23:05.066877 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:05.069868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:05.070167 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:15.153923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 13 20:23:15.161752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:23:15.260762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:15.272971 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:15.319937 kubelet[1686]: E0113 20:23:15.319823 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:15.322152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:15.322282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:25.403972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 13 20:23:25.409662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:25.546662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:25.549148 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:25.583843 kubelet[1701]: E0113 20:23:25.583796 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:25.585970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:25.586120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:35.653875 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 13 20:23:35.659664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:35.781725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:35.782111 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:35.821262 kubelet[1716]: E0113 20:23:35.821219 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:35.823757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:35.823900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:45.903770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 13 20:23:45.910718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:46.009721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:23:46.010199 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:46.051965 kubelet[1731]: E0113 20:23:46.051870 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:46.054994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:46.055203 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:56.153921 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 13 20:23:56.162782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:56.278304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:56.289226 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:56.327514 kubelet[1746]: E0113 20:23:56.327380 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:56.330594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:56.330772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:24:04.243009 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:24:04.248848 systemd[1]: Started sshd@0-138.199.153.211:22-139.178.89.65:35920.service - OpenSSH per-connection server daemon (139.178.89.65:35920). Jan 13 20:24:05.236894 sshd[1754]: Accepted publickey for core from 139.178.89.65 port 35920 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:05.239337 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:05.248171 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:24:05.259885 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:24:05.263162 systemd-logind[1458]: New session 1 of user core. Jan 13 20:24:05.275285 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:24:05.281812 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:24:05.286496 (systemd)[1758]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:24:05.394897 systemd[1758]: Queued start job for default target default.target. Jan 13 20:24:05.404133 systemd[1758]: Created slice app.slice - User Application Slice. Jan 13 20:24:05.404184 systemd[1758]: Reached target paths.target - Paths. Jan 13 20:24:05.404205 systemd[1758]: Reached target timers.target - Timers. Jan 13 20:24:05.406019 systemd[1758]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:24:05.420076 systemd[1758]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:24:05.420311 systemd[1758]: Reached target sockets.target - Sockets. 
Jan 13 20:24:05.420350 systemd[1758]: Reached target basic.target - Basic System. Jan 13 20:24:05.420542 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:24:05.421525 systemd[1758]: Reached target default.target - Main User Target. Jan 13 20:24:05.421605 systemd[1758]: Startup finished in 128ms. Jan 13 20:24:05.432759 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:24:06.117882 systemd[1]: Started sshd@1-138.199.153.211:22-139.178.89.65:35928.service - OpenSSH per-connection server daemon (139.178.89.65:35928). Jan 13 20:24:06.403774 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 13 20:24:06.410723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:24:06.527659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:24:06.527785 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:24:06.567078 kubelet[1779]: E0113 20:24:06.567018 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:24:06.569899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:24:06.570107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:24:07.104326 sshd[1769]: Accepted publickey for core from 139.178.89.65 port 35928 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:07.106158 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:07.112286 systemd-logind[1458]: New session 2 of user core. Jan 13 20:24:07.116598 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:24:07.777826 sshd[1786]: Connection closed by 139.178.89.65 port 35928 Jan 13 20:24:07.778903 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:07.783579 systemd[1]: sshd@1-138.199.153.211:22-139.178.89.65:35928.service: Deactivated successfully. Jan 13 20:24:07.786431 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:24:07.788862 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:24:07.790605 systemd-logind[1458]: Removed session 2. Jan 13 20:24:07.958881 systemd[1]: Started sshd@2-138.199.153.211:22-139.178.89.65:35944.service - OpenSSH per-connection server daemon (139.178.89.65:35944). Jan 13 20:24:08.935643 sshd[1791]: Accepted publickey for core from 139.178.89.65 port 35944 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:08.939587 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:08.945518 systemd-logind[1458]: New session 3 of user core. Jan 13 20:24:08.950786 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:24:09.607019 sshd[1793]: Connection closed by 139.178.89.65 port 35944 Jan 13 20:24:09.606545 sshd-session[1791]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:09.612261 systemd[1]: sshd@2-138.199.153.211:22-139.178.89.65:35944.service: Deactivated successfully. Jan 13 20:24:09.612638 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. 
Jan 13 20:24:09.615508 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:24:09.616709 systemd-logind[1458]: Removed session 3. Jan 13 20:24:09.782795 systemd[1]: Started sshd@3-138.199.153.211:22-139.178.89.65:35952.service - OpenSSH per-connection server daemon (139.178.89.65:35952). Jan 13 20:24:10.765686 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 35952 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:10.768160 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:10.773505 systemd-logind[1458]: New session 4 of user core. Jan 13 20:24:10.778643 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:24:11.446704 sshd[1800]: Connection closed by 139.178.89.65 port 35952 Jan 13 20:24:11.447991 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:11.452048 systemd[1]: sshd@3-138.199.153.211:22-139.178.89.65:35952.service: Deactivated successfully. Jan 13 20:24:11.453969 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:24:11.454800 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:24:11.456956 systemd-logind[1458]: Removed session 4. Jan 13 20:24:11.621121 systemd[1]: Started sshd@4-138.199.153.211:22-139.178.89.65:59998.service - OpenSSH per-connection server daemon (139.178.89.65:59998). Jan 13 20:24:12.599664 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 59998 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:12.601795 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:12.608471 systemd-logind[1458]: New session 5 of user core. Jan 13 20:24:12.614723 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:24:13.129776 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:24:13.130058 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:24:13.149593 sudo[1808]: pam_unix(sudo:session): session closed for user root Jan 13 20:24:13.309600 sshd[1807]: Connection closed by 139.178.89.65 port 59998 Jan 13 20:24:13.310826 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:13.316043 systemd[1]: sshd@4-138.199.153.211:22-139.178.89.65:59998.service: Deactivated successfully. Jan 13 20:24:13.318711 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:24:13.319834 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:24:13.323041 systemd-logind[1458]: Removed session 5. Jan 13 20:24:13.483750 systemd[1]: Started sshd@5-138.199.153.211:22-139.178.89.65:60000.service - OpenSSH per-connection server daemon (139.178.89.65:60000). Jan 13 20:24:14.460649 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 60000 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:14.462576 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:14.466712 systemd-logind[1458]: New session 6 of user core. Jan 13 20:24:14.481730 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 20:24:14.978574 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:24:14.978901 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:24:14.983253 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 13 20:24:14.988660 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:24:14.989003 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:24:15.002965 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:24:15.028724 augenrules[1839]: No rules Jan 13 20:24:15.029911 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:24:15.030161 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:24:15.032076 sudo[1816]: pam_unix(sudo:session): session closed for user root Jan 13 20:24:15.189541 sshd[1815]: Connection closed by 139.178.89.65 port 60000 Jan 13 20:24:15.190163 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:15.195972 systemd[1]: sshd@5-138.199.153.211:22-139.178.89.65:60000.service: Deactivated successfully. Jan 13 20:24:15.199837 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:24:15.201265 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:24:15.203633 systemd-logind[1458]: Removed session 6. Jan 13 20:24:15.368003 systemd[1]: Started sshd@6-138.199.153.211:22-139.178.89.65:60012.service - OpenSSH per-connection server daemon (139.178.89.65:60012). Jan 13 20:24:16.359116 sshd[1847]: Accepted publickey for core from 139.178.89.65 port 60012 ssh2: RSA SHA256:mP9Np05W8ayjbouGSSYjPkGP0Fk3DK/yb5iC6Sb3lHc Jan 13 20:24:16.360843 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:16.366842 systemd-logind[1458]: New session 7 of user core. Jan 13 20:24:16.371701 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:24:16.653449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 13 20:24:16.658627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:24:16.768222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:24:16.780923 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:24:16.820701 kubelet[1858]: E0113 20:24:16.820626 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:24:16.822659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:24:16.822778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:24:16.887274 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:24:16.887585 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:24:17.435241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:24:17.449192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:24:17.477276 systemd[1]: Reloading requested from client PID 1897 ('systemctl') (unit session-7.scope)... Jan 13 20:24:17.477291 systemd[1]: Reloading... Jan 13 20:24:17.580445 zram_generator::config[1936]: No configuration found. Jan 13 20:24:17.678239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:24:17.743281 systemd[1]: Reloading finished in 265 ms. Jan 13 20:24:17.791519 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:24:17.791600 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:24:17.791860 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:24:17.798861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:24:17.916389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:24:17.932969 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:24:17.977838 kubelet[1984]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:24:17.977838 kubelet[1984]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:24:17.977838 kubelet[1984]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:24:17.978350 kubelet[1984]: I0113 20:24:17.977931 1984 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:24:18.865583 kubelet[1984]: I0113 20:24:18.865528 1984 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 20:24:18.866436 kubelet[1984]: I0113 20:24:18.865744 1984 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:24:18.866436 kubelet[1984]: I0113 20:24:18.866234 1984 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 20:24:18.893211 kubelet[1984]: I0113 20:24:18.893167 1984 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:24:18.907030 kubelet[1984]: E0113 20:24:18.906946 1984 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 20:24:18.907030 kubelet[1984]: I0113 20:24:18.906987 1984 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 20:24:18.910880 kubelet[1984]: I0113 20:24:18.910856 1984 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:24:18.911999 kubelet[1984]: I0113 20:24:18.911972 1984 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 20:24:18.912669 kubelet[1984]: I0113 20:24:18.912211 1984 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:24:18.912669 kubelet[1984]: I0113 20:24:18.912317 1984 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.4","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 20:24:18.912823 kubelet[1984]: I0113 20:24:18.912683 1984 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:24:18.912823 kubelet[1984]: I0113 20:24:18.912695 1984 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 20:24:18.912918 kubelet[1984]: I0113 20:24:18.912886 1984 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:24:18.915600 kubelet[1984]: I0113 20:24:18.915111 1984 kubelet.go:408] "Attempting to sync node with API server" Jan 13 20:24:18.915600 kubelet[1984]: I0113 20:24:18.915148 1984 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:24:18.915600 kubelet[1984]: I0113 20:24:18.915177 1984 kubelet.go:314] "Adding apiserver pod source" Jan 13 20:24:18.915600 kubelet[1984]: I0113 20:24:18.915192 1984 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:24:18.915600 kubelet[1984]: E0113 20:24:18.915360 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:18.915600 kubelet[1984]: E0113 20:24:18.915395 1984 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:18.918914 kubelet[1984]: I0113 20:24:18.918896 1984 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:24:18.921071 kubelet[1984]: I0113 20:24:18.921047 1984 kubelet.go:837] 
"Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:24:18.921309 kubelet[1984]: W0113 20:24:18.921297 1984 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:24:18.922101 kubelet[1984]: I0113 20:24:18.922085 1984 server.go:1269] "Started kubelet" Jan 13 20:24:18.923342 kubelet[1984]: I0113 20:24:18.923278 1984 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:24:18.924882 kubelet[1984]: I0113 20:24:18.923721 1984 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:24:18.924882 kubelet[1984]: I0113 20:24:18.923865 1984 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:24:18.924882 kubelet[1984]: I0113 20:24:18.924138 1984 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:24:18.925135 kubelet[1984]: I0113 20:24:18.925105 1984 server.go:460] "Adding debug handlers to kubelet server" Jan 13 20:24:18.932369 kubelet[1984]: I0113 20:24:18.932294 1984 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 20:24:18.934710 kubelet[1984]: W0113 20:24:18.934674 1984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:24:18.936455 kubelet[1984]: E0113 20:24:18.934783 1984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 13 20:24:18.936455 kubelet[1984]: W0113 20:24:18.934926 1984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:24:18.936455 kubelet[1984]: E0113 20:24:18.934943 1984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 13 20:24:18.936455 kubelet[1984]: I0113 20:24:18.935697 1984 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 20:24:18.936455 kubelet[1984]: I0113 20:24:18.935781 1984 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 20:24:18.936455 kubelet[1984]: I0113 20:24:18.935850 1984 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:24:18.940085 kubelet[1984]: E0113 20:24:18.940039 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:18.940437 kubelet[1984]: I0113 20:24:18.940348 1984 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:24:18.940502 kubelet[1984]: I0113 20:24:18.940471 1984 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:24:18.942429 kubelet[1984]: 
E0113 20:24:18.941083 1984 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.181a5a450cff84bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-01-13 20:24:18.922063035 +0000 UTC m=+0.983842783,LastTimestamp:2025-01-13 20:24:18.922063035 +0000 UTC m=+0.983842783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Jan 13 20:24:18.942566 kubelet[1984]: W0113 20:24:18.942519 1984 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 20:24:18.942566 kubelet[1984]: E0113 20:24:18.942542 1984 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 13 20:24:18.942658 kubelet[1984]: E0113 20:24:18.942604 1984 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:24:18.945199 kubelet[1984]: I0113 20:24:18.945168 1984 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:24:18.955577 kubelet[1984]: E0113 20:24:18.955554 1984 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 20:24:18.955863 kubelet[1984]: E0113 20:24:18.955775 1984 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.181a5a450e38d7e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-01-13 20:24:18.94259709 +0000 UTC m=+1.004376958,LastTimestamp:2025-01-13 20:24:18.94259709 +0000 UTC m=+1.004376958,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Jan 13 20:24:18.966536 kubelet[1984]: I0113 20:24:18.966509 1984 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:24:18.966536 kubelet[1984]: I0113 20:24:18.966528 1984 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:24:18.966536 kubelet[1984]: I0113 20:24:18.966545 1984 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:24:18.990313 kubelet[1984]: I0113 20:24:18.990277 
1984 policy_none.go:49] "None policy: Start" Jan 13 20:24:18.993187 kubelet[1984]: I0113 20:24:18.992573 1984 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:24:18.993187 kubelet[1984]: I0113 20:24:18.992618 1984 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:24:18.998523 kubelet[1984]: I0113 20:24:18.998479 1984 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:24:19.000085 kubelet[1984]: I0113 20:24:18.999732 1984 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:24:19.000085 kubelet[1984]: I0113 20:24:18.999752 1984 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:24:19.000085 kubelet[1984]: I0113 20:24:18.999769 1984 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 20:24:19.000085 kubelet[1984]: E0113 20:24:18.999865 1984 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:24:19.006334 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:24:19.025680 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:24:19.029966 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:24:19.040104 kubelet[1984]: I0113 20:24:19.039965 1984 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:24:19.040823 kubelet[1984]: I0113 20:24:19.040366 1984 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 20:24:19.040823 kubelet[1984]: E0113 20:24:19.040403 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.040823 kubelet[1984]: I0113 20:24:19.040386 1984 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:24:19.040823 kubelet[1984]: I0113 20:24:19.040757 1984 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:24:19.043207 kubelet[1984]: E0113 20:24:19.043156 1984 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Jan 13 20:24:19.144261 kubelet[1984]: I0113 20:24:19.142120 1984 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.4" Jan 13 20:24:19.161982 kubelet[1984]: I0113 20:24:19.161776 1984 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.4" Jan 13 20:24:19.161982 kubelet[1984]: E0113 20:24:19.161835 1984 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": node \"10.0.0.4\" not found" Jan 13 20:24:19.169939 kubelet[1984]: I0113 20:24:19.169870 1984 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 20:24:19.170580 containerd[1480]: time="2025-01-13T20:24:19.170505801Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 20:24:19.171070 kubelet[1984]: I0113 20:24:19.170909 1984 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:24:19.185526 kubelet[1984]: E0113 20:24:19.185464 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.286589 kubelet[1984]: E0113 20:24:19.286519 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.387179 kubelet[1984]: E0113 20:24:19.387100 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.461387 sudo[1865]: pam_unix(sudo:session): session closed for user root Jan 13 20:24:19.488313 kubelet[1984]: E0113 20:24:19.488240 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.589063 kubelet[1984]: E0113 20:24:19.588967 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.621308 sshd[1849]: Connection closed by 139.178.89.65 port 60012 Jan 13 20:24:19.622228 sshd-session[1847]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:19.626644 systemd[1]: sshd@6-138.199.153.211:22-139.178.89.65:60012.service: Deactivated successfully. Jan 13 20:24:19.629594 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:24:19.631067 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:24:19.632607 systemd-logind[1458]: Removed session 7. Jan 13 20:24:19.689975 kubelet[1984]: E0113 20:24:19.689919 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.790842 kubelet[1984]: E0113 20:24:19.790657 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.869554 kubelet[1984]: I0113 20:24:19.869466 1984 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:24:19.869827 kubelet[1984]: W0113 20:24:19.869750 1984 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:24:19.891725 kubelet[1984]: E0113 20:24:19.891666 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:19.916138 kubelet[1984]: E0113 20:24:19.916069 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:19.992326 kubelet[1984]: E0113 20:24:19.992261 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:20.092952 kubelet[1984]: E0113 20:24:20.092795 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:20.193611 kubelet[1984]: E0113 20:24:20.193557 1984 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:20.917128 kubelet[1984]: E0113 20:24:20.917065 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:20.918161 kubelet[1984]: 
I0113 20:24:20.918124 1984 apiserver.go:52] "Watching apiserver" Jan 13 20:24:20.923336 kubelet[1984]: E0113 20:24:20.923281 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:20.933072 systemd[1]: Created slice kubepods-besteffort-pode7a4e4fa_cd41_4737_8cf0_7da80ffec3cd.slice - libcontainer container kubepods-besteffort-pode7a4e4fa_cd41_4737_8cf0_7da80ffec3cd.slice. Jan 13 20:24:20.938282 kubelet[1984]: I0113 20:24:20.937806 1984 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 20:24:20.946788 systemd[1]: Created slice kubepods-besteffort-podc0a5e35c_4a84_429d_8fc2_4dc00618f541.slice - libcontainer container kubepods-besteffort-podc0a5e35c_4a84_429d_8fc2_4dc00618f541.slice. Jan 13 20:24:20.948313 kubelet[1984]: I0113 20:24:20.948242 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ebb7c666-d950-48d5-86f1-1fa7d2125320-kubelet-dir\") pod \"csi-node-driver-dtldl\" (UID: \"ebb7c666-d950-48d5-86f1-1fa7d2125320\") " pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:20.949325 kubelet[1984]: I0113 20:24:20.949283 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-lib-modules\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949325 kubelet[1984]: I0113 20:24:20.949321 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-cni-log-dir\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949450 kubelet[1984]: I0113 20:24:20.949338 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4vfg\" (UniqueName: \"kubernetes.io/projected/ebb7c666-d950-48d5-86f1-1fa7d2125320-kube-api-access-q4vfg\") pod \"csi-node-driver-dtldl\" (UID: \"ebb7c666-d950-48d5-86f1-1fa7d2125320\") " pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:20.949450 kubelet[1984]: I0113 20:24:20.949356 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0a5e35c-4a84-429d-8fc2-4dc00618f541-kube-proxy\") pod \"kube-proxy-4kjm9\" (UID: \"c0a5e35c-4a84-429d-8fc2-4dc00618f541\") " pod="kube-system/kube-proxy-4kjm9" Jan 13 20:24:20.949450 kubelet[1984]: I0113 20:24:20.949369 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8s7t\" (UniqueName: \"kubernetes.io/projected/c0a5e35c-4a84-429d-8fc2-4dc00618f541-kube-api-access-t8s7t\") pod \"kube-proxy-4kjm9\" (UID: \"c0a5e35c-4a84-429d-8fc2-4dc00618f541\") " pod="kube-system/kube-proxy-4kjm9" Jan 13 20:24:20.949450 kubelet[1984]: I0113 20:24:20.949384 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-tigera-ca-bundle\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949450 kubelet[1984]: I0113 20:24:20.949398 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-var-run-calico\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949556 kubelet[1984]: I0113 20:24:20.949542 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqv9\" (UniqueName: \"kubernetes.io/projected/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-kube-api-access-ngqv9\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949620 kubelet[1984]: I0113 20:24:20.949565 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ebb7c666-d950-48d5-86f1-1fa7d2125320-varrun\") pod \"csi-node-driver-dtldl\" (UID: \"ebb7c666-d950-48d5-86f1-1fa7d2125320\") " pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:20.949620 kubelet[1984]: I0113 20:24:20.949581 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-xtables-lock\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949620 kubelet[1984]: I0113 20:24:20.949594 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-policysync\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949620 kubelet[1984]: I0113 20:24:20.949609 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-node-certs\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949701 kubelet[1984]: I0113 20:24:20.949623 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-var-lib-calico\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949701 kubelet[1984]: I0113 20:24:20.949637 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-cni-net-dir\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949701 kubelet[1984]: I0113 20:24:20.949653 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ebb7c666-d950-48d5-86f1-1fa7d2125320-socket-dir\") pod 
\"csi-node-driver-dtldl\" (UID: \"ebb7c666-d950-48d5-86f1-1fa7d2125320\") " pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:20.949701 kubelet[1984]: I0113 20:24:20.949668 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ebb7c666-d950-48d5-86f1-1fa7d2125320-registration-dir\") pod \"csi-node-driver-dtldl\" (UID: \"ebb7c666-d950-48d5-86f1-1fa7d2125320\") " pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:20.949701 kubelet[1984]: I0113 20:24:20.949684 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0a5e35c-4a84-429d-8fc2-4dc00618f541-xtables-lock\") pod \"kube-proxy-4kjm9\" (UID: \"c0a5e35c-4a84-429d-8fc2-4dc00618f541\") " pod="kube-system/kube-proxy-4kjm9" Jan 13 20:24:20.949801 kubelet[1984]: I0113 20:24:20.949698 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0a5e35c-4a84-429d-8fc2-4dc00618f541-lib-modules\") pod \"kube-proxy-4kjm9\" (UID: \"c0a5e35c-4a84-429d-8fc2-4dc00618f541\") " pod="kube-system/kube-proxy-4kjm9" Jan 13 20:24:20.949801 kubelet[1984]: I0113 20:24:20.949713 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-cni-bin-dir\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:20.949801 kubelet[1984]: I0113 20:24:20.949727 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd-flexvol-driver-host\") pod \"calico-node-w7cjd\" (UID: \"e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd\") " pod="calico-system/calico-node-w7cjd" Jan 13 20:24:21.054002 kubelet[1984]: E0113 20:24:21.053662 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.054002 kubelet[1984]: W0113 20:24:21.053706 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.054002 kubelet[1984]: E0113 20:24:21.053741 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.054720 kubelet[1984]: E0113 20:24:21.054115 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.054720 kubelet[1984]: W0113 20:24:21.054133 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.054720 kubelet[1984]: E0113 20:24:21.054154 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:21.054720 kubelet[1984]: E0113 20:24:21.054426 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.054720 kubelet[1984]: W0113 20:24:21.054454 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.054720 kubelet[1984]: E0113 20:24:21.054472 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.055377 kubelet[1984]: E0113 20:24:21.054877 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.055377 kubelet[1984]: W0113 20:24:21.054897 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.055377 kubelet[1984]: E0113 20:24:21.054915 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.055377 kubelet[1984]: E0113 20:24:21.055206 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.055377 kubelet[1984]: W0113 20:24:21.055276 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.055377 kubelet[1984]: E0113 20:24:21.055291 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.055697 kubelet[1984]: E0113 20:24:21.055460 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.055697 kubelet[1984]: W0113 20:24:21.055470 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.055697 kubelet[1984]: E0113 20:24:21.055482 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.055697 kubelet[1984]: E0113 20:24:21.055614 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.055697 kubelet[1984]: W0113 20:24:21.055621 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.055697 kubelet[1984]: E0113 20:24:21.055630 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:21.055998 kubelet[1984]: E0113 20:24:21.055778 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.055998 kubelet[1984]: W0113 20:24:21.055787 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.055998 kubelet[1984]: E0113 20:24:21.055796 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.058002 kubelet[1984]: E0113 20:24:21.056629 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.058002 kubelet[1984]: W0113 20:24:21.056656 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.058002 kubelet[1984]: E0113 20:24:21.057130 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.058002 kubelet[1984]: E0113 20:24:21.057173 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.058002 kubelet[1984]: W0113 20:24:21.057191 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.058002 kubelet[1984]: E0113 20:24:21.057307 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.058002 kubelet[1984]: E0113 20:24:21.057707 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.058002 kubelet[1984]: W0113 20:24:21.057728 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.058002 kubelet[1984]: E0113 20:24:21.057788 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.060963 kubelet[1984]: E0113 20:24:21.060541 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.060963 kubelet[1984]: W0113 20:24:21.060556 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.060963 kubelet[1984]: E0113 20:24:21.060588 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:21.060963 kubelet[1984]: E0113 20:24:21.060737 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.060963 kubelet[1984]: W0113 20:24:21.060744 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.060963 kubelet[1984]: E0113 20:24:21.060767 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.060963 kubelet[1984]: E0113 20:24:21.060890 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.060963 kubelet[1984]: W0113 20:24:21.060898 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.060963 kubelet[1984]: E0113 20:24:21.060917 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.062653 kubelet[1984]: E0113 20:24:21.062397 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.062653 kubelet[1984]: W0113 20:24:21.062430 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.062653 kubelet[1984]: E0113 20:24:21.062502 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:21.062653 kubelet[1984]: E0113 20:24:21.062593 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.062653 kubelet[1984]: W0113 20:24:21.062600 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.063104 kubelet[1984]: E0113 20:24:21.062908 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.063104 kubelet[1984]: W0113 20:24:21.062919 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.063282 kubelet[1984]: E0113 20:24:21.063229 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.063282 kubelet[1984]: W0113 20:24:21.063242 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.063282 kubelet[1984]: E0113 20:24:21.063268 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.063362 kubelet[1984]: E0113 20:24:21.063310 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.063362 kubelet[1984]: E0113 20:24:21.063322 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.063708 kubelet[1984]: E0113 20:24:21.063601 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.063708 kubelet[1984]: W0113 20:24:21.063616 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.063708 kubelet[1984]: E0113 20:24:21.063639 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.063989 kubelet[1984]: E0113 20:24:21.063792 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.063989 kubelet[1984]: W0113 20:24:21.063803 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.063989 kubelet[1984]: E0113 20:24:21.063820 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:21.064151 kubelet[1984]: E0113 20:24:21.064121 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.064151 kubelet[1984]: W0113 20:24:21.064134 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.064313 kubelet[1984]: E0113 20:24:21.064243 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.064579 kubelet[1984]: E0113 20:24:21.064489 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.064579 kubelet[1984]: W0113 20:24:21.064513 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.064579 kubelet[1984]: E0113 20:24:21.064533 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.064953 kubelet[1984]: E0113 20:24:21.064866 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.064953 kubelet[1984]: W0113 20:24:21.064880 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.064953 kubelet[1984]: E0113 20:24:21.064904 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.065180 kubelet[1984]: E0113 20:24:21.065112 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.065180 kubelet[1984]: W0113 20:24:21.065123 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.065180 kubelet[1984]: E0113 20:24:21.065146 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.065628 kubelet[1984]: E0113 20:24:21.065525 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.065628 kubelet[1984]: W0113 20:24:21.065539 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.065628 kubelet[1984]: E0113 20:24:21.065562 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:21.065921 kubelet[1984]: E0113 20:24:21.065810 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.065921 kubelet[1984]: W0113 20:24:21.065822 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.065921 kubelet[1984]: E0113 20:24:21.065843 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.066110 kubelet[1984]: E0113 20:24:21.066096 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.066250 kubelet[1984]: W0113 20:24:21.066135 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.066250 kubelet[1984]: E0113 20:24:21.066159 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.066685 kubelet[1984]: E0113 20:24:21.066436 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.066685 kubelet[1984]: W0113 20:24:21.066449 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.066685 kubelet[1984]: E0113 20:24:21.066465 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.066685 kubelet[1984]: E0113 20:24:21.066685 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.066819 kubelet[1984]: W0113 20:24:21.066696 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.066819 kubelet[1984]: E0113 20:24:21.066735 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.066924 kubelet[1984]: E0113 20:24:21.066912 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.066924 kubelet[1984]: W0113 20:24:21.066924 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.066990 kubelet[1984]: E0113 20:24:21.066934 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:21.072736 kubelet[1984]: E0113 20:24:21.072632 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.072736 kubelet[1984]: W0113 20:24:21.072655 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.072736 kubelet[1984]: E0113 20:24:21.072674 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.076186 kubelet[1984]: E0113 20:24:21.076160 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.076186 kubelet[1984]: W0113 20:24:21.076186 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.076275 kubelet[1984]: E0113 20:24:21.076204 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:21.087448 kubelet[1984]: E0113 20:24:21.086639 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:21.087737 kubelet[1984]: W0113 20:24:21.087594 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:21.087737 kubelet[1984]: E0113 20:24:21.087624 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:21.248007 containerd[1480]: time="2025-01-13T20:24:21.247817007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w7cjd,Uid:e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd,Namespace:calico-system,Attempt:0,}" Jan 13 20:24:21.252990 containerd[1480]: time="2025-01-13T20:24:21.252826438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kjm9,Uid:c0a5e35c-4a84-429d-8fc2-4dc00618f541,Namespace:kube-system,Attempt:0,}" Jan 13 20:24:21.829158 containerd[1480]: time="2025-01-13T20:24:21.829095503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:24:21.832960 containerd[1480]: time="2025-01-13T20:24:21.832884075Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 13 20:24:21.834312 containerd[1480]: time="2025-01-13T20:24:21.834255727Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:24:21.835268 containerd[1480]: time="2025-01-13T20:24:21.835238358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:24:21.837276 containerd[1480]: time="2025-01-13T20:24:21.836919235Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:24:21.839819 containerd[1480]: time="2025-01-13T20:24:21.839762934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:24:21.840887 containerd[1480]: time="2025-01-13T20:24:21.840648610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.750935ms" Jan 13 20:24:21.844607 containerd[1480]: time="2025-01-13T20:24:21.844480060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.534059ms" Jan 13 20:24:21.917957 kubelet[1984]: E0113 20:24:21.917867 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:21.954780 containerd[1480]: time="2025-01-13T20:24:21.954577160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:21.954780 containerd[1480]: time="2025-01-13T20:24:21.954645477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:21.954780 containerd[1480]: time="2025-01-13T20:24:21.954660396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:21.955687 containerd[1480]: time="2025-01-13T20:24:21.955469556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:21.960261 containerd[1480]: time="2025-01-13T20:24:21.960024930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:21.960261 containerd[1480]: time="2025-01-13T20:24:21.960077648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:21.960261 containerd[1480]: time="2025-01-13T20:24:21.960088127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:21.960261 containerd[1480]: time="2025-01-13T20:24:21.960158044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:22.029597 systemd[1]: Started cri-containerd-2a692e3c1585f6903801300f6c83c984f0aa1e747af43983322897b58db3c37a.scope - libcontainer container 2a692e3c1585f6903801300f6c83c984f0aa1e747af43983322897b58db3c37a. Jan 13 20:24:22.031849 systemd[1]: Started cri-containerd-cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba.scope - libcontainer container cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba. Jan 13 20:24:22.068124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2989540853.mount: Deactivated successfully. Jan 13 20:24:22.071357 containerd[1480]: time="2025-01-13T20:24:22.071217805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w7cjd,Uid:e7a4e4fa-cd41-4737-8cf0-7da80ffec3cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba\"" Jan 13 20:24:22.072666 containerd[1480]: time="2025-01-13T20:24:22.072611899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kjm9,Uid:c0a5e35c-4a84-429d-8fc2-4dc00618f541,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a692e3c1585f6903801300f6c83c984f0aa1e747af43983322897b58db3c37a\"" Jan 13 20:24:22.075699 containerd[1480]: time="2025-01-13T20:24:22.075673754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Jan 13 20:24:22.918500 kubelet[1984]: E0113 20:24:22.918467 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:23.001087 kubelet[1984]: E0113 20:24:23.000707 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:23.141529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206046724.mount: Deactivated successfully. 
Jan 13 20:24:23.462605 containerd[1480]: time="2025-01-13T20:24:23.462540239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:23.463972 containerd[1480]: time="2025-01-13T20:24:23.463907057Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771452" Jan 13 20:24:23.464504 containerd[1480]: time="2025-01-13T20:24:23.464398355Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:23.466741 containerd[1480]: time="2025-01-13T20:24:23.466680491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:23.467611 containerd[1480]: time="2025-01-13T20:24:23.467457176Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.391508035s" Jan 13 20:24:23.467611 containerd[1480]: time="2025-01-13T20:24:23.467488415Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Jan 13 20:24:23.470242 containerd[1480]: time="2025-01-13T20:24:23.470196331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:24:23.470985 containerd[1480]: time="2025-01-13T20:24:23.470941338Z" level=info msg="CreateContainer within sandbox \"2a692e3c1585f6903801300f6c83c984f0aa1e747af43983322897b58db3c37a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:24:23.490770 containerd[1480]: time="2025-01-13T20:24:23.490686800Z" level=info msg="CreateContainer within sandbox \"2a692e3c1585f6903801300f6c83c984f0aa1e747af43983322897b58db3c37a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04e903b35984f6e9531fd52c71433d7f030e092e9e518ddadd4c73a1d6defeb9\"" Jan 13 20:24:23.498891 containerd[1480]: time="2025-01-13T20:24:23.498822510Z" level=info msg="StartContainer for \"04e903b35984f6e9531fd52c71433d7f030e092e9e518ddadd4c73a1d6defeb9\"" Jan 13 20:24:23.533881 systemd[1]: Started cri-containerd-04e903b35984f6e9531fd52c71433d7f030e092e9e518ddadd4c73a1d6defeb9.scope - libcontainer container 04e903b35984f6e9531fd52c71433d7f030e092e9e518ddadd4c73a1d6defeb9. 
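The "Pulled image ... in ...s" entries above carry the image reference, digest, size, and pull latency in a fixed shape, so they can be mined straight out of the journal. A minimal Python sketch of that extraction follows; the regular expression is my own, written against the escaping journald shows for these msg fields, and is not part of containerd.

import re

# One "Pulled image" entry copied from the containerd log above (quotes inside
# msg are escaped as \" exactly as journald shows them).
entry = (
    r'time="2025-01-13T20:24:23.467457176Z" level=info '
    r'msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id '
    r'\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", '
    r'repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest '
    r'\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", '
    r'size \"26770445\" in 1.391508035s"'
)

pattern = re.compile(
    r'Pulled image \\"(?P<image>[^\\]+)\\" with image id \\"(?P<id>[^\\]+)\\", '
    r'repo tag \\"[^\\]+\\", repo digest \\"(?P<digest>[^\\]+)\\", '
    r'size \\"(?P<size>\d+)\\" in (?P<duration>[\d.]+)(?P<unit>m?s)'
)

m = pattern.search(entry)
print(m.group("image"))                          # registry.k8s.io/kube-proxy:v1.31.4
print(int(m.group("size")), "bytes")             # 26770445 bytes
print(m.group("duration") + m.group("unit"))     # 1.391508035s

The same pattern also matches the earlier pause:3.8 pulls, whose durations are reported in milliseconds.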
Jan 13 20:24:23.567923 containerd[1480]: time="2025-01-13T20:24:23.567786456Z" level=info msg="StartContainer for \"04e903b35984f6e9531fd52c71433d7f030e092e9e518ddadd4c73a1d6defeb9\" returns successfully" Jan 13 20:24:23.919630 kubelet[1984]: E0113 20:24:23.919555 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:24.057499 kubelet[1984]: E0113 20:24:24.057455 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.057499 kubelet[1984]: W0113 20:24:24.057488 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.057716 kubelet[1984]: E0113 20:24:24.057528 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.057891 kubelet[1984]: E0113 20:24:24.057851 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.057891 kubelet[1984]: W0113 20:24:24.057873 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.057891 kubelet[1984]: E0113 20:24:24.057888 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.058156 kubelet[1984]: E0113 20:24:24.058140 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.058195 kubelet[1984]: W0113 20:24:24.058170 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.058195 kubelet[1984]: E0113 20:24:24.058186 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.058464 kubelet[1984]: E0113 20:24:24.058447 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.058515 kubelet[1984]: W0113 20:24:24.058467 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.058515 kubelet[1984]: E0113 20:24:24.058482 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:24.058759 kubelet[1984]: E0113 20:24:24.058739 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.058797 kubelet[1984]: W0113 20:24:24.058766 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.058797 kubelet[1984]: E0113 20:24:24.058784 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.059020 kubelet[1984]: E0113 20:24:24.059005 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.059054 kubelet[1984]: W0113 20:24:24.059022 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.059054 kubelet[1984]: E0113 20:24:24.059035 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.059311 kubelet[1984]: E0113 20:24:24.059239 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.059311 kubelet[1984]: W0113 20:24:24.059255 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.059311 kubelet[1984]: E0113 20:24:24.059268 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.059563 kubelet[1984]: E0113 20:24:24.059547 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.059596 kubelet[1984]: W0113 20:24:24.059565 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.059596 kubelet[1984]: E0113 20:24:24.059581 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.059891 kubelet[1984]: E0113 20:24:24.059867 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.059891 kubelet[1984]: W0113 20:24:24.059881 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.059891 kubelet[1984]: E0113 20:24:24.059894 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:24.060168 kubelet[1984]: E0113 20:24:24.060136 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.060168 kubelet[1984]: W0113 20:24:24.060169 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.060259 kubelet[1984]: E0113 20:24:24.060183 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.060472 kubelet[1984]: E0113 20:24:24.060456 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.060505 kubelet[1984]: W0113 20:24:24.060474 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.060539 kubelet[1984]: E0113 20:24:24.060488 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.060781 kubelet[1984]: E0113 20:24:24.060762 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.060839 kubelet[1984]: W0113 20:24:24.060789 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.060839 kubelet[1984]: E0113 20:24:24.060805 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.061153 kubelet[1984]: E0113 20:24:24.061140 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.061153 kubelet[1984]: W0113 20:24:24.061154 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.061224 kubelet[1984]: E0113 20:24:24.061164 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.061351 kubelet[1984]: E0113 20:24:24.061339 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.061351 kubelet[1984]: W0113 20:24:24.061350 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.061432 kubelet[1984]: E0113 20:24:24.061359 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:24.061553 kubelet[1984]: E0113 20:24:24.061541 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.061553 kubelet[1984]: W0113 20:24:24.061553 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.061618 kubelet[1984]: E0113 20:24:24.061562 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.061714 kubelet[1984]: E0113 20:24:24.061704 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.061714 kubelet[1984]: W0113 20:24:24.061714 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.061774 kubelet[1984]: E0113 20:24:24.061722 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.061949 kubelet[1984]: E0113 20:24:24.061935 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.061949 kubelet[1984]: W0113 20:24:24.061949 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.062015 kubelet[1984]: E0113 20:24:24.061960 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.062147 kubelet[1984]: E0113 20:24:24.062133 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.062147 kubelet[1984]: W0113 20:24:24.062146 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.062219 kubelet[1984]: E0113 20:24:24.062156 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.062322 kubelet[1984]: E0113 20:24:24.062311 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.062322 kubelet[1984]: W0113 20:24:24.062321 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.062382 kubelet[1984]: E0113 20:24:24.062356 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:24.062539 kubelet[1984]: E0113 20:24:24.062526 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.062539 kubelet[1984]: W0113 20:24:24.062539 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.062610 kubelet[1984]: E0113 20:24:24.062547 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.070886 kubelet[1984]: E0113 20:24:24.070271 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.070886 kubelet[1984]: W0113 20:24:24.070310 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.070886 kubelet[1984]: E0113 20:24:24.070338 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.071449 kubelet[1984]: E0113 20:24:24.071333 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.071449 kubelet[1984]: W0113 20:24:24.071356 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.071449 kubelet[1984]: E0113 20:24:24.071391 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.071731 kubelet[1984]: E0113 20:24:24.071615 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.071731 kubelet[1984]: W0113 20:24:24.071637 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.071731 kubelet[1984]: E0113 20:24:24.071658 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.071936 kubelet[1984]: E0113 20:24:24.071831 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.071936 kubelet[1984]: W0113 20:24:24.071841 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.071936 kubelet[1984]: E0113 20:24:24.071850 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:24.072172 kubelet[1984]: E0113 20:24:24.071984 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.072172 kubelet[1984]: W0113 20:24:24.071992 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.072172 kubelet[1984]: E0113 20:24:24.072006 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.072172 kubelet[1984]: E0113 20:24:24.072173 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.072518 kubelet[1984]: W0113 20:24:24.072181 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.072518 kubelet[1984]: E0113 20:24:24.072195 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.072908 kubelet[1984]: E0113 20:24:24.072788 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.072908 kubelet[1984]: W0113 20:24:24.072808 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.072908 kubelet[1984]: E0113 20:24:24.072865 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.073070 kubelet[1984]: E0113 20:24:24.073051 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.073070 kubelet[1984]: W0113 20:24:24.073064 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.073264 kubelet[1984]: E0113 20:24:24.073084 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.073264 kubelet[1984]: E0113 20:24:24.073254 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.073264 kubelet[1984]: W0113 20:24:24.073264 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.073446 kubelet[1984]: E0113 20:24:24.073273 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:24.073446 kubelet[1984]: E0113 20:24:24.073428 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.073446 kubelet[1984]: W0113 20:24:24.073437 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.073446 kubelet[1984]: E0113 20:24:24.073446 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.073655 kubelet[1984]: E0113 20:24:24.073604 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.073655 kubelet[1984]: W0113 20:24:24.073615 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.073655 kubelet[1984]: E0113 20:24:24.073624 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.073999 kubelet[1984]: E0113 20:24:24.073979 1984 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:24.073999 kubelet[1984]: W0113 20:24:24.073994 1984 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:24.073999 kubelet[1984]: E0113 20:24:24.074006 1984 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:24.811268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674335272.mount: Deactivated successfully. 
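The block of kubelet errors above is one failure reported three ways: the FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, the binary is not there, the captured output is therefore the empty string, and unmarshalling an empty string is what Go's encoding/json reports as "unexpected end of JSON input". Below is a small illustrative sketch of that chain, assuming nothing beyond the path quoted in the log; note Python's json module phrases the parse error differently than Go.

import json
import subprocess

# Path taken verbatim from the kubelet errors above; on this node the binary
# does not exist, which is the whole point of the failure.
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

try:
    out = subprocess.run([DRIVER, "init"], capture_output=True, text=True).stdout
except FileNotFoundError:
    # Mirrors: driver call failed: executable ... not found, output: ""
    out = ""

try:
    json.loads(out)   # the kubelet expects a JSON status object here
except json.JSONDecodeError as e:
    # Go's encoding/json reports this same condition as
    # "unexpected end of JSON input".
    print("empty driver output cannot be parsed:", e)

The probe runs for every plugin-directory rescan, which is why the same three lines recur with fresh timestamps; Calico's flexvol-driver init container, started just below, is presumably what eventually installs that uds binary.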
Jan 13 20:24:24.890722 containerd[1480]: time="2025-01-13T20:24:24.889617215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:24.890722 containerd[1480]: time="2025-01-13T20:24:24.890668249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 13 20:24:24.891466 containerd[1480]: time="2025-01-13T20:24:24.891375859Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:24.897449 containerd[1480]: time="2025-01-13T20:24:24.896754305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:24.898178 containerd[1480]: time="2025-01-13T20:24:24.898134645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.427892915s" Jan 13 20:24:24.898178 containerd[1480]: time="2025-01-13T20:24:24.898175243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 13 20:24:24.902174 containerd[1480]: time="2025-01-13T20:24:24.902145470Z" level=info msg="CreateContainer within sandbox \"cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:24:24.918136 containerd[1480]: time="2025-01-13T20:24:24.918101017Z" level=info msg="CreateContainer within sandbox \"cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770\"" Jan 13 20:24:24.919090 containerd[1480]: time="2025-01-13T20:24:24.919061015Z" level=info msg="StartContainer for \"d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770\"" Jan 13 20:24:24.920221 kubelet[1984]: E0113 20:24:24.920190 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:24.955691 systemd[1]: Started cri-containerd-d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770.scope - libcontainer container d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770. 
Jan 13 20:24:24.990367 containerd[1480]: time="2025-01-13T20:24:24.990290598Z" level=info msg="StartContainer for \"d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770\" returns successfully" Jan 13 20:24:25.002623 kubelet[1984]: E0113 20:24:25.000932 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:25.007632 systemd[1]: cri-containerd-d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770.scope: Deactivated successfully. Jan 13 20:24:25.045151 kubelet[1984]: I0113 20:24:25.045079 1984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4kjm9" podStartSLOduration=4.651589078 podStartE2EDuration="6.045062102s" podCreationTimestamp="2025-01-13 20:24:19 +0000 UTC" firstStartedPulling="2025-01-13 20:24:22.075110141 +0000 UTC m=+4.136889889" lastFinishedPulling="2025-01-13 20:24:23.468583165 +0000 UTC m=+5.530362913" observedRunningTime="2025-01-13 20:24:24.038983197 +0000 UTC m=+6.100762985" watchObservedRunningTime="2025-01-13 20:24:25.045062102 +0000 UTC m=+7.106841850" Jan 13 20:24:25.147077 containerd[1480]: time="2025-01-13T20:24:25.146939628Z" level=info msg="shim disconnected" id=d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770 namespace=k8s.io Jan 13 20:24:25.147910 containerd[1480]: time="2025-01-13T20:24:25.147286573Z" level=warning msg="cleaning up after shim disconnected" id=d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770 namespace=k8s.io Jan 13 20:24:25.147910 containerd[1480]: time="2025-01-13T20:24:25.147304653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:25.788877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d75b73e33bb8fafaf4159f0b5412386bd1d7b853262304c3bc474a18317ed770-rootfs.mount: Deactivated successfully. 
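The pod_startup_latency_tracker entry above reports two durations that are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that figure minus the image-pull window (firstStartedPulling to lastFinishedPulling). That reading of the SLO figure is an assumption on my part, but the arithmetic below, using only the timestamps quoted in the entry, reproduces both numbers exactly.

# Seconds past 20:24:00 UTC, copied from the pod_startup_latency_tracker entry.
created       = 19.0            # podCreationTimestamp  20:24:19
first_pulling = 22.075110141    # firstStartedPulling
last_pulled   = 23.468583165    # lastFinishedPulling
watched       = 25.045062102    # watchObservedRunningTime

e2e = watched - created                      # podStartE2EDuration
slo = e2e - (last_pulled - first_pulling)    # podStartSLOduration, excluding pull time

print(f"e2e={e2e:.9f}s")   # 6.045062102s
print(f"slo={slo:.9f}s")   # 4.651589078s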
Jan 13 20:24:25.920440 kubelet[1984]: E0113 20:24:25.920307 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:26.029094 containerd[1480]: time="2025-01-13T20:24:26.029054601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:24:26.921256 kubelet[1984]: E0113 20:24:26.921188 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:27.002613 kubelet[1984]: E0113 20:24:27.000737 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:27.922232 kubelet[1984]: E0113 20:24:27.922170 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:28.634576 containerd[1480]: time="2025-01-13T20:24:28.634510482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:28.636221 containerd[1480]: time="2025-01-13T20:24:28.635930831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 13 20:24:28.638822 containerd[1480]: time="2025-01-13T20:24:28.637613850Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:28.640152 containerd[1480]: time="2025-01-13T20:24:28.640122399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:28.640991 containerd[1480]: time="2025-01-13T20:24:28.640946009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.611507063s" Jan 13 20:24:28.640991 containerd[1480]: time="2025-01-13T20:24:28.640986528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 13 20:24:28.643556 containerd[1480]: time="2025-01-13T20:24:28.643491677Z" level=info msg="CreateContainer within sandbox \"cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:24:28.664972 containerd[1480]: time="2025-01-13T20:24:28.664883383Z" level=info msg="CreateContainer within sandbox \"cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76\"" Jan 13 20:24:28.667564 containerd[1480]: time="2025-01-13T20:24:28.665829789Z" level=info msg="StartContainer for \"a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76\"" Jan 13 20:24:28.699753 systemd[1]: Started 
cri-containerd-a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76.scope - libcontainer container a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76. Jan 13 20:24:28.736513 containerd[1480]: time="2025-01-13T20:24:28.735961891Z" level=info msg="StartContainer for \"a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76\" returns successfully" Jan 13 20:24:28.923336 kubelet[1984]: E0113 20:24:28.923287 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:29.001979 kubelet[1984]: E0113 20:24:29.001353 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:29.248726 containerd[1480]: time="2025-01-13T20:24:29.248500284Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:24:29.250915 systemd[1]: cri-containerd-a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76.scope: Deactivated successfully. Jan 13 20:24:29.273795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76-rootfs.mount: Deactivated successfully. Jan 13 20:24:29.316001 kubelet[1984]: I0113 20:24:29.315945 1984 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:24:29.421816 containerd[1480]: time="2025-01-13T20:24:29.421747907Z" level=info msg="shim disconnected" id=a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76 namespace=k8s.io Jan 13 20:24:29.421816 containerd[1480]: time="2025-01-13T20:24:29.421805105Z" level=warning msg="cleaning up after shim disconnected" id=a6a14f4e315f3296724cc11d3ed0eb93794681b75fdc28f41c65bd7c0132aa76 namespace=k8s.io Jan 13 20:24:29.421816 containerd[1480]: time="2025-01-13T20:24:29.421816625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:29.924320 kubelet[1984]: E0113 20:24:29.924263 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:30.046151 containerd[1480]: time="2025-01-13T20:24:30.045774573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:24:30.925182 kubelet[1984]: E0113 20:24:30.925121 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:31.009391 systemd[1]: Created slice kubepods-besteffort-podebb7c666_d950_48d5_86f1_1fa7d2125320.slice - libcontainer container kubepods-besteffort-podebb7c666_d950_48d5_86f1_1fa7d2125320.slice. 
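The earlier "failed to reload cni configuration" entry shows containerd waking up on a write to /etc/cni/net.d/calico-kubeconfig and still finding no usable network config, which is what keeps the NetworkReady=false / "cni plugin not initialized" messages above coming. A hedged sketch of that check follows; the accepted extensions (.conf, .conflist, .json) are an assumption based on the conventional libcni loader and are not stated in the log.

import glob
import os

# Directory named in the containerd error above. The write event was for
# calico-kubeconfig, a credential file rather than a network config, so the
# loader still finds nothing it can use.
CNI_DIR = "/etc/cni/net.d"

configs = sorted(
    p for ext in ("*.conf", "*.conflist", "*.json")
    for p in glob.glob(os.path.join(CNI_DIR, ext))
)

if not configs:
    print(f"no network config found in {CNI_DIR}")   # same condition containerd reports
else:
    print("network configs:", configs)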
Jan 13 20:24:31.013076 containerd[1480]: time="2025-01-13T20:24:31.013015921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:0,}" Jan 13 20:24:31.096131 containerd[1480]: time="2025-01-13T20:24:31.096026603Z" level=error msg="Failed to destroy network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:31.096131 containerd[1480]: time="2025-01-13T20:24:31.096384312Z" level=error msg="encountered an error cleaning up failed sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:31.096131 containerd[1480]: time="2025-01-13T20:24:31.096466070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:31.097995 kubelet[1984]: E0113 20:24:31.097954 1984 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:31.098083 kubelet[1984]: E0113 20:24:31.098027 1984 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:31.098083 kubelet[1984]: E0113 20:24:31.098050 1984 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:31.098147 kubelet[1984]: E0113 20:24:31.098100 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtldl_calico-system(ebb7c666-d950-48d5-86f1-1fa7d2125320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtldl_calico-system(ebb7c666-d950-48d5-86f1-1fa7d2125320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:31.098308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650-shm.mount: Deactivated successfully. Jan 13 20:24:31.925469 kubelet[1984]: E0113 20:24:31.925248 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:32.050461 kubelet[1984]: I0113 20:24:32.049948 1984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650" Jan 13 20:24:32.051132 containerd[1480]: time="2025-01-13T20:24:32.050819646Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\"" Jan 13 20:24:32.051132 containerd[1480]: time="2025-01-13T20:24:32.050979522Z" level=info msg="Ensure that sandbox 431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650 in task-service has been cleanup successfully" Jan 13 20:24:32.053001 systemd[1]: run-netns-cni\x2d8c9fc9f3\x2d301b\x2d830d\x2d98f5\x2d3c65fa21f865.mount: Deactivated successfully. Jan 13 20:24:32.054423 containerd[1480]: time="2025-01-13T20:24:32.054304943Z" level=info msg="TearDown network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" successfully" Jan 13 20:24:32.054423 containerd[1480]: time="2025-01-13T20:24:32.054359061Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" returns successfully" Jan 13 20:24:32.055282 containerd[1480]: time="2025-01-13T20:24:32.055256234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:1,}" Jan 13 20:24:32.170203 containerd[1480]: time="2025-01-13T20:24:32.170019019Z" level=error msg="Failed to destroy network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:32.172046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1-shm.mount: Deactivated successfully. 
Jan 13 20:24:32.173344 containerd[1480]: time="2025-01-13T20:24:32.172490466Z" level=error msg="encountered an error cleaning up failed sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:32.173344 containerd[1480]: time="2025-01-13T20:24:32.172572103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:32.173513 kubelet[1984]: E0113 20:24:32.172855 1984 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:32.173513 kubelet[1984]: E0113 20:24:32.172908 1984 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:32.173513 kubelet[1984]: E0113 20:24:32.172929 1984 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:32.173604 kubelet[1984]: E0113 20:24:32.172972 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtldl_calico-system(ebb7c666-d950-48d5-86f1-1fa7d2125320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtldl_calico-system(ebb7c666-d950-48d5-86f1-1fa7d2125320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:32.926668 kubelet[1984]: E0113 20:24:32.926604 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:33.054191 kubelet[1984]: I0113 20:24:33.054156 1984 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1" Jan 13 20:24:33.055140 containerd[1480]: time="2025-01-13T20:24:33.054979523Z" level=info msg="StopPodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\"" Jan 13 20:24:33.055236 containerd[1480]: time="2025-01-13T20:24:33.055142999Z" level=info msg="Ensure that sandbox aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1 in task-service has been cleanup successfully" Jan 13 20:24:33.057648 containerd[1480]: time="2025-01-13T20:24:33.056993586Z" level=info msg="TearDown network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" successfully" Jan 13 20:24:33.057648 containerd[1480]: time="2025-01-13T20:24:33.057022866Z" level=info msg="StopPodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" returns successfully" Jan 13 20:24:33.057648 containerd[1480]: time="2025-01-13T20:24:33.057274618Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\"" Jan 13 20:24:33.057648 containerd[1480]: time="2025-01-13T20:24:33.057353616Z" level=info msg="TearDown network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" successfully" Jan 13 20:24:33.057648 containerd[1480]: time="2025-01-13T20:24:33.057362656Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" returns successfully" Jan 13 20:24:33.057394 systemd[1]: run-netns-cni\x2d6337a369\x2db115\x2d47f7\x2d9bc8\x2de16db1991298.mount: Deactivated successfully. Jan 13 20:24:33.059794 containerd[1480]: time="2025-01-13T20:24:33.059588633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:2,}" Jan 13 20:24:33.155899 containerd[1480]: time="2025-01-13T20:24:33.155686236Z" level=error msg="Failed to destroy network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:33.158303 containerd[1480]: time="2025-01-13T20:24:33.156730806Z" level=error msg="encountered an error cleaning up failed sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:33.158303 containerd[1480]: time="2025-01-13T20:24:33.156821524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:33.158126 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b-shm.mount: Deactivated successfully. 
Jan 13 20:24:33.158565 kubelet[1984]: E0113 20:24:33.157062 1984 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:33.158565 kubelet[1984]: E0113 20:24:33.157122 1984 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:33.158565 kubelet[1984]: E0113 20:24:33.157143 1984 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:33.158654 kubelet[1984]: E0113 20:24:33.157180 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtldl_calico-system(ebb7c666-d950-48d5-86f1-1fa7d2125320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtldl_calico-system(ebb7c666-d950-48d5-86f1-1fa7d2125320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:33.927685 kubelet[1984]: E0113 20:24:33.927623 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:34.008239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925852655.mount: Deactivated successfully. 
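Every failed csi-node-driver-dtldl sandbox attempt above fails for the same stated reason: the Calico CNI plugin stats /var/lib/calico/nodename and it does not exist yet. A tiny check mirroring that precondition, using only the path and guidance quoted in the error itself:

import os

# File named in the RunPodSandbox errors above; per the plugin's own message,
# it is expected to appear once the calico/node container is running and has
# mounted /var/lib/calico/.
NODENAME = "/var/lib/calico/nodename"

if os.path.exists(NODENAME):
    with open(NODENAME) as f:
        print("calico node name:", f.read().strip())
else:
    # Same guidance the plugin emits in the log:
    print("stat /var/lib/calico/nodename: no such file or directory: "
          "check that the calico/node container is running and has mounted /var/lib/calico/")

Per that hint, the calico-node container started at 20:24:34 below is the component expected to create the file, after which subsequent sandbox attempts should stop failing on it.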
Jan 13 20:24:34.046708 containerd[1480]: time="2025-01-13T20:24:34.046646548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:34.047967 containerd[1480]: time="2025-01-13T20:24:34.047924273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 20:24:34.048926 containerd[1480]: time="2025-01-13T20:24:34.048898767Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:34.051694 containerd[1480]: time="2025-01-13T20:24:34.051667653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:34.052909 containerd[1480]: time="2025-01-13T20:24:34.052884380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.00705117s" Jan 13 20:24:34.053014 containerd[1480]: time="2025-01-13T20:24:34.052999177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 20:24:34.056708 kubelet[1984]: I0113 20:24:34.056687 1984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b" Jan 13 20:24:34.057345 containerd[1480]: time="2025-01-13T20:24:34.057319141Z" level=info msg="StopPodSandbox for \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\"" Jan 13 20:24:34.057610 containerd[1480]: time="2025-01-13T20:24:34.057589654Z" level=info msg="Ensure that sandbox fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b in task-service has been cleanup successfully" Jan 13 20:24:34.057899 containerd[1480]: time="2025-01-13T20:24:34.057878366Z" level=info msg="TearDown network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\" successfully" Jan 13 20:24:34.058015 containerd[1480]: time="2025-01-13T20:24:34.057998763Z" level=info msg="StopPodSandbox for \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\" returns successfully" Jan 13 20:24:34.062252 containerd[1480]: time="2025-01-13T20:24:34.061658225Z" level=info msg="StopPodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\"" Jan 13 20:24:34.062252 containerd[1480]: time="2025-01-13T20:24:34.061751422Z" level=info msg="TearDown network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" successfully" Jan 13 20:24:34.062252 containerd[1480]: time="2025-01-13T20:24:34.061761302Z" level=info msg="StopPodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" returns successfully" Jan 13 20:24:34.061909 systemd[1]: run-netns-cni\x2dd7e82a87\x2d8a5d\x2db6d5\x2da711\x2d8258539d9d09.mount: Deactivated successfully. 
Jan 13 20:24:34.066144 containerd[1480]: time="2025-01-13T20:24:34.064745862Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\"" Jan 13 20:24:34.066144 containerd[1480]: time="2025-01-13T20:24:34.064831900Z" level=info msg="TearDown network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" successfully" Jan 13 20:24:34.066144 containerd[1480]: time="2025-01-13T20:24:34.064841699Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" returns successfully" Jan 13 20:24:34.067072 containerd[1480]: time="2025-01-13T20:24:34.066927203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:3,}" Jan 13 20:24:34.068553 containerd[1480]: time="2025-01-13T20:24:34.068323246Z" level=info msg="CreateContainer within sandbox \"cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:24:34.091994 containerd[1480]: time="2025-01-13T20:24:34.091952652Z" level=info msg="CreateContainer within sandbox \"cc685cf5f68e5bb7091d6b483c8120515678518dae6500b1dc80eb1de4a0d3ba\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d3ac87d3c86487b7f6f595f67d7ac812303ffb574a6ffd23367cc1847db9c914\"" Jan 13 20:24:34.094544 containerd[1480]: time="2025-01-13T20:24:34.093689005Z" level=info msg="StartContainer for \"d3ac87d3c86487b7f6f595f67d7ac812303ffb574a6ffd23367cc1847db9c914\"" Jan 13 20:24:34.124779 systemd[1]: Started cri-containerd-d3ac87d3c86487b7f6f595f67d7ac812303ffb574a6ffd23367cc1847db9c914.scope - libcontainer container d3ac87d3c86487b7f6f595f67d7ac812303ffb574a6ffd23367cc1847db9c914. 
Jan 13 20:24:34.163153 containerd[1480]: time="2025-01-13T20:24:34.163111382Z" level=info msg="StartContainer for \"d3ac87d3c86487b7f6f595f67d7ac812303ffb574a6ffd23367cc1847db9c914\" returns successfully" Jan 13 20:24:34.163360 containerd[1480]: time="2025-01-13T20:24:34.163172860Z" level=error msg="Failed to destroy network for sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:34.164246 containerd[1480]: time="2025-01-13T20:24:34.164218752Z" level=error msg="encountered an error cleaning up failed sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:34.164396 containerd[1480]: time="2025-01-13T20:24:34.164375228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:34.164800 kubelet[1984]: E0113 20:24:34.164606 1984 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:34.164800 kubelet[1984]: E0113 20:24:34.164652 1984 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:34.164800 kubelet[1984]: E0113 20:24:34.164673 1984 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dtldl" Jan 13 20:24:34.164937 kubelet[1984]: E0113 20:24:34.164743 1984 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dtldl_calico-system(ebb7c666-d950-48d5-86f1-1fa7d2125320)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dtldl_calico-system(ebb7c666-d950-48d5-86f1-1fa7d2125320)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dtldl" podUID="ebb7c666-d950-48d5-86f1-1fa7d2125320" Jan 13 20:24:34.265901 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:24:34.266082 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 13 20:24:34.928086 kubelet[1984]: E0113 20:24:34.928042 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:35.065149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f-shm.mount: Deactivated successfully. Jan 13 20:24:35.068469 kubelet[1984]: I0113 20:24:35.066815 1984 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f" Jan 13 20:24:35.068651 containerd[1480]: time="2025-01-13T20:24:35.067901070Z" level=info msg="StopPodSandbox for \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\"" Jan 13 20:24:35.068651 containerd[1480]: time="2025-01-13T20:24:35.068105825Z" level=info msg="Ensure that sandbox db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f in task-service has been cleanup successfully" Jan 13 20:24:35.068651 containerd[1480]: time="2025-01-13T20:24:35.068526534Z" level=info msg="TearDown network for sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\" successfully" Jan 13 20:24:35.068651 containerd[1480]: time="2025-01-13T20:24:35.068546813Z" level=info msg="StopPodSandbox for \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\" returns successfully" Jan 13 20:24:35.069853 systemd[1]: run-netns-cni\x2de427db55\x2dfa27\x2daec2\x2dc477\x2ddf1b3bc328be.mount: Deactivated successfully. 
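The RunPodSandbox failure above (Attempt:3, sandbox db3a0853…) reports the same precondition that doomed the earlier attempts: the Calico CNI plugin stats /var/lib/calico/nodename and finds it missing, and that file only appears once the calico/node container is running and has mounted /var/lib/calico/. The calico-node container d3ac87d3… only just started, so kubelet tears this sandbox down and the next attempt (Attempt:4, below) succeeds. A minimal check for that state on a node, assuming only what the error text itself says (the script and its wording are illustrative, not part of Calico):

```python
#!/usr/bin/env python3
"""Check whether calico-node has written its nodename file yet.

The CNI error in the log ("stat /var/lib/calico/nodename: no such file or
directory") means this file is absent, so pod network setup keeps failing.
"""
from pathlib import Path

NODENAME = Path("/var/lib/calico/nodename")

if NODENAME.is_file():
    print(f"calico-node is up; nodename = {NODENAME.read_text().strip()!r}")
else:
    print("nodename file missing: calico-node not ready, "
          "sandbox creation will keep failing as seen above")
```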
Jan 13 20:24:35.070827 containerd[1480]: time="2025-01-13T20:24:35.070789516Z" level=info msg="StopPodSandbox for \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\"" Jan 13 20:24:35.070915 containerd[1480]: time="2025-01-13T20:24:35.070891234Z" level=info msg="TearDown network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\" successfully" Jan 13 20:24:35.070915 containerd[1480]: time="2025-01-13T20:24:35.070905433Z" level=info msg="StopPodSandbox for \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\" returns successfully" Jan 13 20:24:35.071574 containerd[1480]: time="2025-01-13T20:24:35.071431020Z" level=info msg="StopPodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\"" Jan 13 20:24:35.071574 containerd[1480]: time="2025-01-13T20:24:35.071518738Z" level=info msg="TearDown network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" successfully" Jan 13 20:24:35.071574 containerd[1480]: time="2025-01-13T20:24:35.071528537Z" level=info msg="StopPodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" returns successfully" Jan 13 20:24:35.072122 containerd[1480]: time="2025-01-13T20:24:35.072015165Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\"" Jan 13 20:24:35.072277 containerd[1480]: time="2025-01-13T20:24:35.072102843Z" level=info msg="TearDown network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" successfully" Jan 13 20:24:35.072277 containerd[1480]: time="2025-01-13T20:24:35.072200720Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" returns successfully" Jan 13 20:24:35.072955 containerd[1480]: time="2025-01-13T20:24:35.072927902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:4,}" Jan 13 20:24:35.102966 kubelet[1984]: I0113 20:24:35.102615 1984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-w7cjd" podStartSLOduration=4.123922212 podStartE2EDuration="16.102595067s" podCreationTimestamp="2025-01-13 20:24:19 +0000 UTC" firstStartedPulling="2025-01-13 20:24:22.075088582 +0000 UTC m=+4.136868330" lastFinishedPulling="2025-01-13 20:24:34.053761477 +0000 UTC m=+16.115541185" observedRunningTime="2025-01-13 20:24:35.101980242 +0000 UTC m=+17.163760030" watchObservedRunningTime="2025-01-13 20:24:35.102595067 +0000 UTC m=+17.164374815" Jan 13 20:24:35.280690 systemd-networkd[1377]: caliaf917c565e3: Link UP Jan 13 20:24:35.280897 systemd-networkd[1377]: caliaf917c565e3: Gained carrier Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.112 [INFO][2684] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.138 [INFO][2684] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--dtldl-eth0 csi-node-driver- calico-system ebb7c666-d950-48d5-86f1-1fa7d2125320 1631 0 2025-01-13 20:24:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} 
{k8s 10.0.0.4 csi-node-driver-dtldl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliaf917c565e3 [] []}} ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Namespace="calico-system" Pod="csi-node-driver-dtldl" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dtldl-" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.138 [INFO][2684] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Namespace="calico-system" Pod="csi-node-driver-dtldl" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.189 [INFO][2696] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" HandleID="k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Workload="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.210 [INFO][2696] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" HandleID="k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Workload="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c840), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-dtldl", "timestamp":"2025-01-13 20:24:35.189206783 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.210 [INFO][2696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.210 [INFO][2696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.210 [INFO][2696] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.215 [INFO][2696] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.223 [INFO][2696] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.233 [INFO][2696] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.237 [INFO][2696] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.242 [INFO][2696] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.242 [INFO][2696] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.245 [INFO][2696] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2 Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.255 [INFO][2696] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.265 [INFO][2696] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.265 [INFO][2696] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" host="10.0.0.4" Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.265 [INFO][2696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
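In the IPAM trace above, host 10.0.0.4 confirms its affinity for the block 192.168.99.192/26 and claims 192.168.99.193 for csi-node-driver-dtldl; later allocations in this log (.194 for the nginx pod, .195 for nfs-server-provisioner-0) come from the same block. A /26 spans 64 addresses, 192.168.99.192 through 192.168.99.255, which Python's ipaddress module can confirm directly (a worked check, not Calico code):

```python
import ipaddress

# The affinity block claimed by host "10.0.0.4" in the IPAM trace above.
block = ipaddress.ip_network("192.168.99.192/26")

print(block.num_addresses)        # 64
print(block.network_address)      # 192.168.99.192
print(block.broadcast_address)    # 192.168.99.255

# Workload addresses handed out from this block elsewhere in the log:
for ip in ("192.168.99.193", "192.168.99.194", "192.168.99.195"):
    assert ipaddress.ip_address(ip) in block
```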
Jan 13 20:24:35.303296 containerd[1480]: 2025-01-13 20:24:35.265 [INFO][2696] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" HandleID="k8s-pod-network.e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Workload="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" Jan 13 20:24:35.304548 containerd[1480]: 2025-01-13 20:24:35.270 [INFO][2684] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Namespace="calico-system" Pod="csi-node-driver-dtldl" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--dtldl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebb7c666-d950-48d5-86f1-1fa7d2125320", ResourceVersion:"1631", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-dtldl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf917c565e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:35.304548 containerd[1480]: 2025-01-13 20:24:35.270 [INFO][2684] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Namespace="calico-system" Pod="csi-node-driver-dtldl" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" Jan 13 20:24:35.304548 containerd[1480]: 2025-01-13 20:24:35.270 [INFO][2684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf917c565e3 ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Namespace="calico-system" Pod="csi-node-driver-dtldl" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" Jan 13 20:24:35.304548 containerd[1480]: 2025-01-13 20:24:35.280 [INFO][2684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Namespace="calico-system" Pod="csi-node-driver-dtldl" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" Jan 13 20:24:35.304548 containerd[1480]: 2025-01-13 20:24:35.280 [INFO][2684] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Namespace="calico-system" Pod="csi-node-driver-dtldl" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--dtldl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ebb7c666-d950-48d5-86f1-1fa7d2125320", ResourceVersion:"1631", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2", Pod:"csi-node-driver-dtldl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliaf917c565e3", MAC:"f2:fc:08:71:a9:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:35.304548 containerd[1480]: 2025-01-13 20:24:35.300 [INFO][2684] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2" Namespace="calico-system" Pod="csi-node-driver-dtldl" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--dtldl-eth0" Jan 13 20:24:35.324080 containerd[1480]: time="2025-01-13T20:24:35.323742359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:35.324080 containerd[1480]: time="2025-01-13T20:24:35.323803157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:35.324080 containerd[1480]: time="2025-01-13T20:24:35.323819797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:35.324858 containerd[1480]: time="2025-01-13T20:24:35.324690615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:35.351781 systemd[1]: Started cri-containerd-e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2.scope - libcontainer container e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2. 
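Every container start in this log follows the pattern just shown: containerd loads the io.containerd.runc.v2 shim's event and ttrpc plugins for the new task, and systemd then reports a transient cri-containerd-<id>.scope for the libcontainer cgroup. When working through a journal dump like this one, the 64-character container IDs can be pulled straight out of those scope lines; a small sketch (the regex and helper are mine):

```python
import re

# Matches systemd's "Started cri-containerd-<64-hex-id>.scope" lines.
SCOPE_RE = re.compile(r"Started cri-containerd-([0-9a-f]{64})\.scope")

def started_container_ids(journal_text):
    """Return container IDs whose cri-containerd scopes systemd started."""
    return SCOPE_RE.findall(journal_text)

sample = ("systemd[1]: Started cri-containerd-"
          "e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2"
          ".scope - libcontainer container ...")
print(started_container_ids(sample))
```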
Jan 13 20:24:35.379192 containerd[1480]: time="2025-01-13T20:24:35.379083431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dtldl,Uid:ebb7c666-d950-48d5-86f1-1fa7d2125320,Namespace:calico-system,Attempt:4,} returns sandbox id \"e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2\"" Jan 13 20:24:35.382031 containerd[1480]: time="2025-01-13T20:24:35.381996516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:24:35.928945 kubelet[1984]: E0113 20:24:35.928688 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:35.929435 kernel: bpftool[2871]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:24:36.081374 kubelet[1984]: I0113 20:24:36.081291 1984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:24:36.118792 systemd-networkd[1377]: vxlan.calico: Link UP Jan 13 20:24:36.118799 systemd-networkd[1377]: vxlan.calico: Gained carrier Jan 13 20:24:36.367817 systemd-networkd[1377]: caliaf917c565e3: Gained IPv6LL Jan 13 20:24:36.732481 containerd[1480]: time="2025-01-13T20:24:36.732422455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:36.733867 containerd[1480]: time="2025-01-13T20:24:36.733811501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 20:24:36.734867 containerd[1480]: time="2025-01-13T20:24:36.734822997Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:36.736980 containerd[1480]: time="2025-01-13T20:24:36.736938226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:36.737874 containerd[1480]: time="2025-01-13T20:24:36.737823045Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.355793729s" Jan 13 20:24:36.737874 containerd[1480]: time="2025-01-13T20:24:36.737858364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 20:24:36.740301 containerd[1480]: time="2025-01-13T20:24:36.740268346Z" level=info msg="CreateContainer within sandbox \"e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:24:36.756717 containerd[1480]: time="2025-01-13T20:24:36.756602072Z" level=info msg="CreateContainer within sandbox \"e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"16d5bc381023aae02106d34a1a9083ecc0d70dbcd13ef4598e55ad3ef754bef4\"" Jan 13 20:24:36.757282 containerd[1480]: time="2025-01-13T20:24:36.757250057Z" level=info msg="StartContainer for \"16d5bc381023aae02106d34a1a9083ecc0d70dbcd13ef4598e55ad3ef754bef4\"" Jan 13 20:24:36.795608 
systemd[1]: Started cri-containerd-16d5bc381023aae02106d34a1a9083ecc0d70dbcd13ef4598e55ad3ef754bef4.scope - libcontainer container 16d5bc381023aae02106d34a1a9083ecc0d70dbcd13ef4598e55ad3ef754bef4. Jan 13 20:24:36.828050 containerd[1480]: time="2025-01-13T20:24:36.827771557Z" level=info msg="StartContainer for \"16d5bc381023aae02106d34a1a9083ecc0d70dbcd13ef4598e55ad3ef754bef4\" returns successfully" Jan 13 20:24:36.829676 containerd[1480]: time="2025-01-13T20:24:36.829636552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:24:36.929767 kubelet[1984]: E0113 20:24:36.929701 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:37.430550 systemd[1]: Created slice kubepods-besteffort-pod261de1ad_5b02_41f0_a081_792fef3281f5.slice - libcontainer container kubepods-besteffort-pod261de1ad_5b02_41f0_a081_792fef3281f5.slice. Jan 13 20:24:37.455763 systemd-networkd[1377]: vxlan.calico: Gained IPv6LL Jan 13 20:24:37.557561 kubelet[1984]: I0113 20:24:37.557478 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lhwp\" (UniqueName: \"kubernetes.io/projected/261de1ad-5b02-41f0-a081-792fef3281f5-kube-api-access-4lhwp\") pod \"nginx-deployment-8587fbcb89-sj9gw\" (UID: \"261de1ad-5b02-41f0-a081-792fef3281f5\") " pod="default/nginx-deployment-8587fbcb89-sj9gw" Jan 13 20:24:37.735503 containerd[1480]: time="2025-01-13T20:24:37.734753735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-sj9gw,Uid:261de1ad-5b02-41f0-a081-792fef3281f5,Namespace:default,Attempt:0,}" Jan 13 20:24:37.911472 systemd-networkd[1377]: cali1d87ee43101: Link UP Jan 13 20:24:37.911784 systemd-networkd[1377]: cali1d87ee43101: Gained carrier Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.793 [INFO][2986] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0 nginx-deployment-8587fbcb89- default 261de1ad-5b02-41f0-a081-792fef3281f5 1738 0 2025-01-13 20:24:37 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-8587fbcb89-sj9gw eth0 default [] [] [kns.default ksa.default.default] cali1d87ee43101 [] []}} ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Namespace="default" Pod="nginx-deployment-8587fbcb89-sj9gw" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.793 [INFO][2986] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Namespace="default" Pod="nginx-deployment-8587fbcb89-sj9gw" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.826 [INFO][2997] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" HandleID="k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.852 [INFO][2997] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" HandleID="k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004ceb30), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-8587fbcb89-sj9gw", "timestamp":"2025-01-13 20:24:37.826860555 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.852 [INFO][2997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.852 [INFO][2997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.852 [INFO][2997] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.856 [INFO][2997] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.865 [INFO][2997] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.874 [INFO][2997] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.878 [INFO][2997] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.883 [INFO][2997] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.883 [INFO][2997] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.886 [INFO][2997] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5 Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.893 [INFO][2997] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.904 [INFO][2997] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.904 [INFO][2997] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" host="10.0.0.4" Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.904 [INFO][2997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:24:37.925661 containerd[1480]: 2025-01-13 20:24:37.904 [INFO][2997] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" HandleID="k8s-pod-network.65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Workload="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" Jan 13 20:24:37.927597 containerd[1480]: 2025-01-13 20:24:37.907 [INFO][2986] cni-plugin/k8s.go 386: Populated endpoint ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Namespace="default" Pod="nginx-deployment-8587fbcb89-sj9gw" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"261de1ad-5b02-41f0-a081-792fef3281f5", ResourceVersion:"1738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-sj9gw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1d87ee43101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:37.927597 containerd[1480]: 2025-01-13 20:24:37.907 [INFO][2986] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Namespace="default" Pod="nginx-deployment-8587fbcb89-sj9gw" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" Jan 13 20:24:37.927597 containerd[1480]: 2025-01-13 20:24:37.907 [INFO][2986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d87ee43101 ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Namespace="default" Pod="nginx-deployment-8587fbcb89-sj9gw" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" Jan 13 20:24:37.927597 containerd[1480]: 2025-01-13 20:24:37.911 [INFO][2986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Namespace="default" Pod="nginx-deployment-8587fbcb89-sj9gw" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" Jan 13 20:24:37.927597 containerd[1480]: 2025-01-13 20:24:37.911 [INFO][2986] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Namespace="default" Pod="nginx-deployment-8587fbcb89-sj9gw" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"261de1ad-5b02-41f0-a081-792fef3281f5", ResourceVersion:"1738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5", Pod:"nginx-deployment-8587fbcb89-sj9gw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1d87ee43101", MAC:"9e:8c:d6:77:45:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:37.927597 containerd[1480]: 2025-01-13 20:24:37.924 [INFO][2986] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5" Namespace="default" Pod="nginx-deployment-8587fbcb89-sj9gw" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--8587fbcb89--sj9gw-eth0" Jan 13 20:24:37.931316 kubelet[1984]: E0113 20:24:37.931217 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:37.947802 containerd[1480]: time="2025-01-13T20:24:37.947495606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:37.947802 containerd[1480]: time="2025-01-13T20:24:37.947581364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:37.947802 containerd[1480]: time="2025-01-13T20:24:37.947611083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:37.947802 containerd[1480]: time="2025-01-13T20:24:37.947732920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:37.975695 systemd[1]: Started cri-containerd-65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5.scope - libcontainer container 65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5. 
Jan 13 20:24:38.021934 containerd[1480]: time="2025-01-13T20:24:38.021753379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-sj9gw,Uid:261de1ad-5b02-41f0-a081-792fef3281f5,Namespace:default,Attempt:0,} returns sandbox id \"65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5\"" Jan 13 20:24:38.182834 containerd[1480]: time="2025-01-13T20:24:38.182052527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:38.182834 containerd[1480]: time="2025-01-13T20:24:38.182792711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 20:24:38.183902 containerd[1480]: time="2025-01-13T20:24:38.183792650Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:38.186619 containerd[1480]: time="2025-01-13T20:24:38.186493232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:38.188403 containerd[1480]: time="2025-01-13T20:24:38.187342333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.357677343s" Jan 13 20:24:38.188403 containerd[1480]: time="2025-01-13T20:24:38.187381492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 20:24:38.190061 containerd[1480]: time="2025-01-13T20:24:38.190026276Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:24:38.192541 containerd[1480]: time="2025-01-13T20:24:38.192517262Z" level=info msg="CreateContainer within sandbox \"e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:24:38.213238 containerd[1480]: time="2025-01-13T20:24:38.213182497Z" level=info msg="CreateContainer within sandbox \"e87525848b749dbaeaaa01c177d8b9a49023e23b203202ce56c21cfb831da6b2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"45ddef54261aa1f1cfea8e92a16c8af9e0997d8b0e7cbcc3a9ca42b36d00ec9f\"" Jan 13 20:24:38.214618 containerd[1480]: time="2025-01-13T20:24:38.214538468Z" level=info msg="StartContainer for \"45ddef54261aa1f1cfea8e92a16c8af9e0997d8b0e7cbcc3a9ca42b36d00ec9f\"" Jan 13 20:24:38.243586 systemd[1]: Started cri-containerd-45ddef54261aa1f1cfea8e92a16c8af9e0997d8b0e7cbcc3a9ca42b36d00ec9f.scope - libcontainer container 45ddef54261aa1f1cfea8e92a16c8af9e0997d8b0e7cbcc3a9ca42b36d00ec9f. 
Jan 13 20:24:38.276117 containerd[1480]: time="2025-01-13T20:24:38.275944426Z" level=info msg="StartContainer for \"45ddef54261aa1f1cfea8e92a16c8af9e0997d8b0e7cbcc3a9ca42b36d00ec9f\" returns successfully" Jan 13 20:24:38.915803 kubelet[1984]: E0113 20:24:38.915763 1984 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:38.932460 kubelet[1984]: E0113 20:24:38.932374 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:39.051356 kubelet[1984]: I0113 20:24:39.050882 1984 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:24:39.051356 kubelet[1984]: I0113 20:24:39.050915 1984 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:24:39.121502 kubelet[1984]: I0113 20:24:39.121443 1984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dtldl" podStartSLOduration=17.313130859 podStartE2EDuration="20.121425369s" podCreationTimestamp="2025-01-13 20:24:19 +0000 UTC" firstStartedPulling="2025-01-13 20:24:35.381239056 +0000 UTC m=+17.443018804" lastFinishedPulling="2025-01-13 20:24:38.189533486 +0000 UTC m=+20.251313314" observedRunningTime="2025-01-13 20:24:39.119120536 +0000 UTC m=+21.180900324" watchObservedRunningTime="2025-01-13 20:24:39.121425369 +0000 UTC m=+21.183205117" Jan 13 20:24:39.696144 systemd-networkd[1377]: cali1d87ee43101: Gained IPv6LL Jan 13 20:24:39.933282 kubelet[1984]: E0113 20:24:39.933087 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:40.379420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547580459.mount: Deactivated successfully. 
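The kubelet csi_plugin.go lines above show the registration completing: the node-driver-registrar container started just before (45ddef54…) announces the csi.tigera.io driver over the kubelet plugin socket /var/lib/kubelet/plugins/csi.tigera.io/csi.sock at version 1.0.0. A quick way to confirm that socket is present and really is a Unix socket on the node (illustrative only, not part of the driver):

```python
import stat
from pathlib import Path

# Endpoint reported by kubelet when it validated the csi.tigera.io driver.
SOCK = Path("/var/lib/kubelet/plugins/csi.tigera.io/csi.sock")

try:
    mode = SOCK.stat().st_mode
except FileNotFoundError:
    print("csi.tigera.io socket not present; registrar has not registered yet")
else:
    kind = "unix socket" if stat.S_ISSOCK(mode) else "unexpected file type"
    print(f"{SOCK}: {kind}")
```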
Jan 13 20:24:40.934181 kubelet[1984]: E0113 20:24:40.934121 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:41.147217 containerd[1480]: time="2025-01-13T20:24:41.147126644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:41.149654 containerd[1480]: time="2025-01-13T20:24:41.149593760Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045" Jan 13 20:24:41.150601 containerd[1480]: time="2025-01-13T20:24:41.150553062Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:41.158447 containerd[1480]: time="2025-01-13T20:24:41.158354002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:41.160700 containerd[1480]: time="2025-01-13T20:24:41.160384446Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 2.970315971s" Jan 13 20:24:41.160700 containerd[1480]: time="2025-01-13T20:24:41.160539803Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 20:24:41.163468 containerd[1480]: time="2025-01-13T20:24:41.162982279Z" level=info msg="CreateContainer within sandbox \"65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:24:41.177371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535644086.mount: Deactivated successfully. Jan 13 20:24:41.187163 containerd[1480]: time="2025-01-13T20:24:41.187050807Z" level=info msg="CreateContainer within sandbox \"65397b4b82441cc344b83c35d9925c7b08ba74ac6cbce55a19c944f98bf80fe5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"8520ddb428d3cbb3bd447e1a396d2da809045fd53ab4e92a86768dde593bfcc0\"" Jan 13 20:24:41.188387 containerd[1480]: time="2025-01-13T20:24:41.188340983Z" level=info msg="StartContainer for \"8520ddb428d3cbb3bd447e1a396d2da809045fd53ab4e92a86768dde593bfcc0\"" Jan 13 20:24:41.220789 systemd[1]: Started cri-containerd-8520ddb428d3cbb3bd447e1a396d2da809045fd53ab4e92a86768dde593bfcc0.scope - libcontainer container 8520ddb428d3cbb3bd447e1a396d2da809045fd53ab4e92a86768dde593bfcc0. 
Jan 13 20:24:41.246517 containerd[1480]: time="2025-01-13T20:24:41.246476699Z" level=info msg="StartContainer for \"8520ddb428d3cbb3bd447e1a396d2da809045fd53ab4e92a86768dde593bfcc0\" returns successfully" Jan 13 20:24:41.934373 kubelet[1984]: E0113 20:24:41.934296 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:42.131247 kubelet[1984]: I0113 20:24:42.131114 1984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-sj9gw" podStartSLOduration=1.99348861 podStartE2EDuration="5.131091144s" podCreationTimestamp="2025-01-13 20:24:37 +0000 UTC" firstStartedPulling="2025-01-13 20:24:38.024135287 +0000 UTC m=+20.085915035" lastFinishedPulling="2025-01-13 20:24:41.161737821 +0000 UTC m=+23.223517569" observedRunningTime="2025-01-13 20:24:42.13074275 +0000 UTC m=+24.192522578" watchObservedRunningTime="2025-01-13 20:24:42.131091144 +0000 UTC m=+24.192870932" Jan 13 20:24:42.934666 kubelet[1984]: E0113 20:24:42.934557 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:43.935297 kubelet[1984]: E0113 20:24:43.935219 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:44.935544 kubelet[1984]: E0113 20:24:44.935452 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:45.936736 kubelet[1984]: E0113 20:24:45.936651 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:46.937589 kubelet[1984]: E0113 20:24:46.937509 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:47.938009 kubelet[1984]: E0113 20:24:47.937913 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:48.938777 kubelet[1984]: E0113 20:24:48.938714 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:49.939783 kubelet[1984]: E0113 20:24:49.939587 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:50.432512 kubelet[1984]: I0113 20:24:50.432395 1984 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:24:50.940513 kubelet[1984]: E0113 20:24:50.940454 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:51.941076 kubelet[1984]: E0113 20:24:51.940995 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:52.703548 systemd[1]: Created slice kubepods-besteffort-pod4ac9e8c0_ecf7_4858_a14d_d447605b245a.slice - libcontainer container kubepods-besteffort-pod4ac9e8c0_ecf7_4858_a14d_d447605b245a.slice. 
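The pod_startup_latency_tracker entry for nginx-deployment-8587fbcb89-sj9gw above reports podStartE2EDuration="5.131091144s" and podStartSLOduration=1.99348861; the gap is the image pull window (lastFinishedPulling minus firstStartedPulling on the monotonic m=+ clock), consistent with the SLO figure excluding pull time. The same relation holds for the csi-node-driver-dtldl entry earlier in this log. A worked check with the numbers copied from that line (not kubelet code):

```python
# Monotonic "m=+" offsets from the kubelet startup-latency line for
# default/nginx-deployment-8587fbcb89-sj9gw.
first_started_pulling = 20.085915035
last_finished_pulling = 23.223517569
e2e_duration          = 5.131091144   # podStartE2EDuration

pull_window  = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window

print(f"pull window:  {pull_window:.9f} s")    # about 3.137602534 s
print(f"SLO duration: {slo_duration:.9f} s")   # about 1.993488610 s, matching the log
```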
Jan 13 20:24:52.850608 kubelet[1984]: I0113 20:24:52.850452 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4ac9e8c0-ecf7-4858-a14d-d447605b245a-data\") pod \"nfs-server-provisioner-0\" (UID: \"4ac9e8c0-ecf7-4858-a14d-d447605b245a\") " pod="default/nfs-server-provisioner-0" Jan 13 20:24:52.850608 kubelet[1984]: I0113 20:24:52.850528 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfpc\" (UniqueName: \"kubernetes.io/projected/4ac9e8c0-ecf7-4858-a14d-d447605b245a-kube-api-access-6tfpc\") pod \"nfs-server-provisioner-0\" (UID: \"4ac9e8c0-ecf7-4858-a14d-d447605b245a\") " pod="default/nfs-server-provisioner-0" Jan 13 20:24:52.941573 kubelet[1984]: E0113 20:24:52.941495 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:53.007324 containerd[1480]: time="2025-01-13T20:24:53.007151043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4ac9e8c0-ecf7-4858-a14d-d447605b245a,Namespace:default,Attempt:0,}" Jan 13 20:24:53.194752 systemd-networkd[1377]: cali60e51b789ff: Link UP Jan 13 20:24:53.195209 systemd-networkd[1377]: cali60e51b789ff: Gained carrier Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.070 [INFO][3249] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 4ac9e8c0-ecf7-4858-a14d-d447605b245a 1811 0 2025-01-13 20:24:52 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.070 [INFO][3249] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.110 [INFO][3259] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" HandleID="k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.129 [INFO][3259] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" HandleID="k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cae0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 20:24:53.110145035 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.129 [INFO][3259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.130 [INFO][3259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.130 [INFO][3259] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.137 [INFO][3259] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.145 [INFO][3259] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.153 [INFO][3259] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.158 [INFO][3259] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.163 [INFO][3259] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.163 [INFO][3259] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.170 [INFO][3259] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.179 [INFO][3259] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.188 [INFO][3259] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.188 [INFO][3259] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" host="10.0.0.4" Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.189 [INFO][3259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:24:53.213895 containerd[1480]: 2025-01-13 20:24:53.189 [INFO][3259] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" HandleID="k8s-pod-network.eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:53.216096 containerd[1480]: 2025-01-13 20:24:53.191 [INFO][3249] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4ac9e8c0-ecf7-4858-a14d-d447605b245a", ResourceVersion:"1811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:53.216096 containerd[1480]: 2025-01-13 20:24:53.192 [INFO][3249] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:53.216096 containerd[1480]: 2025-01-13 20:24:53.192 [INFO][3249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:53.216096 containerd[1480]: 2025-01-13 20:24:53.194 [INFO][3249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:53.216239 containerd[1480]: 2025-01-13 20:24:53.195 [INFO][3249] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"4ac9e8c0-ecf7-4858-a14d-d447605b245a", ResourceVersion:"1811", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"12:35:4b:48:70:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:53.216239 containerd[1480]: 2025-01-13 20:24:53.211 [INFO][3249] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:53.236477 containerd[1480]: time="2025-01-13T20:24:53.235922956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:53.236477 containerd[1480]: time="2025-01-13T20:24:53.236098475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:53.236850 containerd[1480]: time="2025-01-13T20:24:53.236694991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:53.236850 containerd[1480]: time="2025-01-13T20:24:53.236789350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:53.260750 systemd[1]: Started cri-containerd-eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf.scope - libcontainer container eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf. 
Jan 13 20:24:53.292395 containerd[1480]: time="2025-01-13T20:24:53.292290019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4ac9e8c0-ecf7-4858-a14d-d447605b245a,Namespace:default,Attempt:0,} returns sandbox id \"eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf\"" Jan 13 20:24:53.294494 containerd[1480]: time="2025-01-13T20:24:53.294194807Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:24:53.942393 kubelet[1984]: E0113 20:24:53.941772 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:54.864057 systemd-networkd[1377]: cali60e51b789ff: Gained IPv6LL Jan 13 20:24:54.942957 kubelet[1984]: E0113 20:24:54.942618 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:54.968778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount340194934.mount: Deactivated successfully. Jan 13 20:24:55.943575 kubelet[1984]: E0113 20:24:55.943174 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:56.506526 containerd[1480]: time="2025-01-13T20:24:56.505763224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:56.507708 containerd[1480]: time="2025-01-13T20:24:56.507673055Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373691" Jan 13 20:24:56.508703 containerd[1480]: time="2025-01-13T20:24:56.508648851Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:56.512832 containerd[1480]: time="2025-01-13T20:24:56.512791353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:56.514009 containerd[1480]: time="2025-01-13T20:24:56.513974587Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.2197321s" Jan 13 20:24:56.514072 containerd[1480]: time="2025-01-13T20:24:56.514007547Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 13 20:24:56.517153 containerd[1480]: time="2025-01-13T20:24:56.517022694Z" level=info msg="CreateContainer within sandbox \"eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:24:56.536105 containerd[1480]: time="2025-01-13T20:24:56.536060889Z" level=info msg="CreateContainer within sandbox \"eea4fd2e5d8d5e14bf511694dc54b86cba471faf8d43d930affa41ed263605bf\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id 
\"c5283d1c524c842367acb7316e022b47c8c40099001081f25c55f7bd39af0d66\"" Jan 13 20:24:56.538200 containerd[1480]: time="2025-01-13T20:24:56.537140004Z" level=info msg="StartContainer for \"c5283d1c524c842367acb7316e022b47c8c40099001081f25c55f7bd39af0d66\"" Jan 13 20:24:56.566672 systemd[1]: Started cri-containerd-c5283d1c524c842367acb7316e022b47c8c40099001081f25c55f7bd39af0d66.scope - libcontainer container c5283d1c524c842367acb7316e022b47c8c40099001081f25c55f7bd39af0d66. Jan 13 20:24:56.599303 containerd[1480]: time="2025-01-13T20:24:56.599236407Z" level=info msg="StartContainer for \"c5283d1c524c842367acb7316e022b47c8c40099001081f25c55f7bd39af0d66\" returns successfully" Jan 13 20:24:56.943577 kubelet[1984]: E0113 20:24:56.943523 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:57.527324 systemd[1]: run-containerd-runc-k8s.io-c5283d1c524c842367acb7316e022b47c8c40099001081f25c55f7bd39af0d66-runc.WYDubf.mount: Deactivated successfully. Jan 13 20:24:57.944883 kubelet[1984]: E0113 20:24:57.944804 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:58.916291 kubelet[1984]: E0113 20:24:58.916201 1984 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:58.946017 kubelet[1984]: E0113 20:24:58.945920 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:59.946429 kubelet[1984]: E0113 20:24:59.946351 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:00.947621 kubelet[1984]: E0113 20:25:00.947523 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:01.947968 kubelet[1984]: E0113 20:25:01.947798 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:02.948457 kubelet[1984]: E0113 20:25:02.948365 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:03.948961 kubelet[1984]: E0113 20:25:03.948884 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:04.950124 kubelet[1984]: E0113 20:25:04.950028 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:05.950587 kubelet[1984]: E0113 20:25:05.950507 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:06.142722 kubelet[1984]: I0113 20:25:06.141933 1984 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.9204614 podStartE2EDuration="14.141883013s" podCreationTimestamp="2025-01-13 20:24:52 +0000 UTC" firstStartedPulling="2025-01-13 20:24:53.293976928 +0000 UTC m=+35.355756676" lastFinishedPulling="2025-01-13 20:24:56.515398541 +0000 UTC m=+38.577178289" observedRunningTime="2025-01-13 20:24:57.170292055 +0000 UTC m=+39.232071843" watchObservedRunningTime="2025-01-13 20:25:06.141883013 +0000 UTC m=+48.203662801" Jan 13 20:25:06.148400 systemd[1]: Created slice 
kubepods-besteffort-podbfa558bf_5f60_4476_8312_5613896eb023.slice - libcontainer container kubepods-besteffort-podbfa558bf_5f60_4476_8312_5613896eb023.slice. Jan 13 20:25:06.332527 kubelet[1984]: I0113 20:25:06.332329 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j5mp\" (UniqueName: \"kubernetes.io/projected/bfa558bf-5f60-4476-8312-5613896eb023-kube-api-access-4j5mp\") pod \"test-pod-1\" (UID: \"bfa558bf-5f60-4476-8312-5613896eb023\") " pod="default/test-pod-1" Jan 13 20:25:06.332527 kubelet[1984]: I0113 20:25:06.332438 1984 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9a69a3cb-7940-4991-935c-2ec6d1a24b48\" (UniqueName: \"kubernetes.io/nfs/bfa558bf-5f60-4476-8312-5613896eb023-pvc-9a69a3cb-7940-4991-935c-2ec6d1a24b48\") pod \"test-pod-1\" (UID: \"bfa558bf-5f60-4476-8312-5613896eb023\") " pod="default/test-pod-1" Jan 13 20:25:06.456871 kernel: FS-Cache: Loaded Jan 13 20:25:06.484169 kernel: RPC: Registered named UNIX socket transport module. Jan 13 20:25:06.484371 kernel: RPC: Registered udp transport module. Jan 13 20:25:06.484554 kernel: RPC: Registered tcp transport module. Jan 13 20:25:06.484698 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 20:25:06.484917 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 13 20:25:06.648451 kernel: NFS: Registering the id_resolver key type Jan 13 20:25:06.648547 kernel: Key type id_resolver registered Jan 13 20:25:06.648566 kernel: Key type id_legacy registered Jan 13 20:25:06.672826 nfsidmap[3449]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 20:25:06.675450 nfsidmap[3450]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 20:25:06.753776 containerd[1480]: time="2025-01-13T20:25:06.753687527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bfa558bf-5f60-4476-8312-5613896eb023,Namespace:default,Attempt:0,}" Jan 13 20:25:06.940921 systemd-networkd[1377]: cali5ec59c6bf6e: Link UP Jan 13 20:25:06.941749 systemd-networkd[1377]: cali5ec59c6bf6e: Gained carrier Jan 13 20:25:06.950691 kubelet[1984]: E0113 20:25:06.950657 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.814 [INFO][3452] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default bfa558bf-5f60-4476-8312-5613896eb023 1865 0 2025-01-13 20:24:54 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.814 [INFO][3452] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.847 [INFO][3462] ipam/ipam_plugin.go 225: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" HandleID="k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Workload="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.875 [INFO][3462] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" HandleID="k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003326c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-01-13 20:25:06.847750954 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.875 [INFO][3462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.875 [INFO][3462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.875 [INFO][3462] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.880 [INFO][3462] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.889 [INFO][3462] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.899 [INFO][3462] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.903 [INFO][3462] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.908 [INFO][3462] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.908 [INFO][3462] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.912 [INFO][3462] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8 Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.919 [INFO][3462] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.934 [INFO][3462] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.934 [INFO][3462] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] 
handle="k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" host="10.0.0.4" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.934 [INFO][3462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.934 [INFO][3462] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" HandleID="k8s-pod-network.dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Workload="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:25:06.957364 containerd[1480]: 2025-01-13 20:25:06.937 [INFO][3452] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"bfa558bf-5f60-4476-8312-5613896eb023", ResourceVersion:"1865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:25:06.958718 containerd[1480]: 2025-01-13 20:25:06.937 [INFO][3452] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:25:06.958718 containerd[1480]: 2025-01-13 20:25:06.937 [INFO][3452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:25:06.958718 containerd[1480]: 2025-01-13 20:25:06.942 [INFO][3452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:25:06.958718 containerd[1480]: 2025-01-13 20:25:06.943 [INFO][3452] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", 
GenerateName:"", Namespace:"default", SelfLink:"", UID:"bfa558bf-5f60-4476-8312-5613896eb023", ResourceVersion:"1865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"92:82:02:14:ae:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:25:06.958718 containerd[1480]: 2025-01-13 20:25:06.954 [INFO][3452] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:25:06.981826 containerd[1480]: time="2025-01-13T20:25:06.981578963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:25:06.981826 containerd[1480]: time="2025-01-13T20:25:06.981636603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:25:06.981826 containerd[1480]: time="2025-01-13T20:25:06.981648483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:25:06.981826 containerd[1480]: time="2025-01-13T20:25:06.981751163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:25:07.002945 systemd[1]: Started cri-containerd-dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8.scope - libcontainer container dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8. 
Jan 13 20:25:07.035492 containerd[1480]: time="2025-01-13T20:25:07.035340304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:bfa558bf-5f60-4476-8312-5613896eb023,Namespace:default,Attempt:0,} returns sandbox id \"dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8\"" Jan 13 20:25:07.038654 containerd[1480]: time="2025-01-13T20:25:07.038384030Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:25:07.406438 containerd[1480]: time="2025-01-13T20:25:07.404549266Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:25:07.406438 containerd[1480]: time="2025-01-13T20:25:07.405064267Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 20:25:07.411538 containerd[1480]: time="2025-01-13T20:25:07.411485040Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 373.04805ms" Jan 13 20:25:07.411538 containerd[1480]: time="2025-01-13T20:25:07.411533480Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 20:25:07.415181 containerd[1480]: time="2025-01-13T20:25:07.415045007Z" level=info msg="CreateContainer within sandbox \"dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 20:25:07.438183 containerd[1480]: time="2025-01-13T20:25:07.438110335Z" level=info msg="CreateContainer within sandbox \"dbbc701942ef180855d801101dcd336e7c48301dd235f58e5cd7e4538f4fc0e8\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b1476e030b9483cefd122de1013be9d798e001cf130da04659be1036ddd260dd\"" Jan 13 20:25:07.439196 containerd[1480]: time="2025-01-13T20:25:07.439153497Z" level=info msg="StartContainer for \"b1476e030b9483cefd122de1013be9d798e001cf130da04659be1036ddd260dd\"" Jan 13 20:25:07.470647 systemd[1]: Started cri-containerd-b1476e030b9483cefd122de1013be9d798e001cf130da04659be1036ddd260dd.scope - libcontainer container b1476e030b9483cefd122de1013be9d798e001cf130da04659be1036ddd260dd. 
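The Pulled image entry above names the same nginx image three ways: the repo tag (ghcr.io/flatcar/nginx:latest), the repo digest (ghcr.io/flatcar/nginx@sha256:eca1...), and the local image id (sha256:a86c...) that the pull resolves to. A small sketch, using only strings from that log entry, of how a digest-pinned reference splits into repository and digest:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Repo digest reported in the PullImage entry above.
	ref := "ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad"

	repo, digest, ok := strings.Cut(ref, "@")
	fmt.Println(ok)     // true: this is a digest reference, immutable once published
	fmt.Println(repo)   // ghcr.io/flatcar/nginx
	fmt.Println(digest) // sha256:eca1... content digest of the published image

	// A tag reference such as "ghcr.io/flatcar/nginx:latest" has no "@" and can move
	// between digests over time, which is why the log records both forms after the pull.
}
```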
Jan 13 20:25:07.499242 containerd[1480]: time="2025-01-13T20:25:07.499173101Z" level=info msg="StartContainer for \"b1476e030b9483cefd122de1013be9d798e001cf130da04659be1036ddd260dd\" returns successfully" Jan 13 20:25:07.950952 kubelet[1984]: E0113 20:25:07.950826 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:08.815853 systemd-networkd[1377]: cali5ec59c6bf6e: Gained IPv6LL Jan 13 20:25:08.952139 kubelet[1984]: E0113 20:25:08.952065 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:09.952750 kubelet[1984]: E0113 20:25:09.952592 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:10.953649 kubelet[1984]: E0113 20:25:10.953514 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:11.954279 kubelet[1984]: E0113 20:25:11.954204 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:12.954725 kubelet[1984]: E0113 20:25:12.954646 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:13.955936 kubelet[1984]: E0113 20:25:13.955854 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:14.956773 kubelet[1984]: E0113 20:25:14.956709 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:15.957591 kubelet[1984]: E0113 20:25:15.957514 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:16.958283 kubelet[1984]: E0113 20:25:16.958212 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:17.959050 kubelet[1984]: E0113 20:25:17.958975 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:18.915922 kubelet[1984]: E0113 20:25:18.915799 1984 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:18.939432 containerd[1480]: time="2025-01-13T20:25:18.939345012Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\"" Jan 13 20:25:18.939870 containerd[1480]: time="2025-01-13T20:25:18.939534533Z" level=info msg="TearDown network for sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" successfully" Jan 13 20:25:18.939870 containerd[1480]: time="2025-01-13T20:25:18.939558693Z" level=info msg="StopPodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" returns successfully" Jan 13 20:25:18.940818 containerd[1480]: time="2025-01-13T20:25:18.940318258Z" level=info msg="RemovePodSandbox for \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\"" Jan 13 20:25:18.940818 containerd[1480]: time="2025-01-13T20:25:18.940345498Z" level=info msg="Forcibly stopping sandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\"" Jan 13 20:25:18.940818 containerd[1480]: time="2025-01-13T20:25:18.940432619Z" level=info msg="TearDown network for sandbox 
\"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" successfully" Jan 13 20:25:18.943642 containerd[1480]: time="2025-01-13T20:25:18.943605600Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:25:18.943718 containerd[1480]: time="2025-01-13T20:25:18.943694401Z" level=info msg="RemovePodSandbox \"431e7c7dd3a5efe6dba8061dbcbe568ccf6a3474aae71d0a9ae9fad6ba53f650\" returns successfully" Jan 13 20:25:18.944595 containerd[1480]: time="2025-01-13T20:25:18.944290644Z" level=info msg="StopPodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\"" Jan 13 20:25:18.944595 containerd[1480]: time="2025-01-13T20:25:18.944381205Z" level=info msg="TearDown network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" successfully" Jan 13 20:25:18.944595 containerd[1480]: time="2025-01-13T20:25:18.944391645Z" level=info msg="StopPodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" returns successfully" Jan 13 20:25:18.944712 containerd[1480]: time="2025-01-13T20:25:18.944674847Z" level=info msg="RemovePodSandbox for \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\"" Jan 13 20:25:18.944712 containerd[1480]: time="2025-01-13T20:25:18.944698207Z" level=info msg="Forcibly stopping sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\"" Jan 13 20:25:18.944959 containerd[1480]: time="2025-01-13T20:25:18.944763568Z" level=info msg="TearDown network for sandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" successfully" Jan 13 20:25:18.947442 containerd[1480]: time="2025-01-13T20:25:18.947392745Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:25:18.947560 containerd[1480]: time="2025-01-13T20:25:18.947459066Z" level=info msg="RemovePodSandbox \"aaf54ba1a2cac7bef517ee373fdcca2bb83d4501868ddb3788c7c5fe9a9571a1\" returns successfully" Jan 13 20:25:18.948220 containerd[1480]: time="2025-01-13T20:25:18.948094070Z" level=info msg="StopPodSandbox for \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\"" Jan 13 20:25:18.948220 containerd[1480]: time="2025-01-13T20:25:18.948192950Z" level=info msg="TearDown network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\" successfully" Jan 13 20:25:18.948220 containerd[1480]: time="2025-01-13T20:25:18.948203031Z" level=info msg="StopPodSandbox for \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\" returns successfully" Jan 13 20:25:18.949540 containerd[1480]: time="2025-01-13T20:25:18.948628593Z" level=info msg="RemovePodSandbox for \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\"" Jan 13 20:25:18.949540 containerd[1480]: time="2025-01-13T20:25:18.948653434Z" level=info msg="Forcibly stopping sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\"" Jan 13 20:25:18.949540 containerd[1480]: time="2025-01-13T20:25:18.948708834Z" level=info msg="TearDown network for sandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\" successfully" Jan 13 20:25:18.951340 containerd[1480]: time="2025-01-13T20:25:18.951303411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:25:18.951500 containerd[1480]: time="2025-01-13T20:25:18.951482412Z" level=info msg="RemovePodSandbox \"fa4aa512205e26f0bc1164974a81456662262d21dd001a721b2a655ef83dbc6b\" returns successfully" Jan 13 20:25:18.951861 containerd[1480]: time="2025-01-13T20:25:18.951838935Z" level=info msg="StopPodSandbox for \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\"" Jan 13 20:25:18.952042 containerd[1480]: time="2025-01-13T20:25:18.952023496Z" level=info msg="TearDown network for sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\" successfully" Jan 13 20:25:18.952103 containerd[1480]: time="2025-01-13T20:25:18.952091216Z" level=info msg="StopPodSandbox for \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\" returns successfully" Jan 13 20:25:18.952475 containerd[1480]: time="2025-01-13T20:25:18.952452459Z" level=info msg="RemovePodSandbox for \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\"" Jan 13 20:25:18.952575 containerd[1480]: time="2025-01-13T20:25:18.952559740Z" level=info msg="Forcibly stopping sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\"" Jan 13 20:25:18.952685 containerd[1480]: time="2025-01-13T20:25:18.952669100Z" level=info msg="TearDown network for sandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\" successfully" Jan 13 20:25:18.955175 containerd[1480]: time="2025-01-13T20:25:18.955123557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
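The 'Unable to read config path ... /etc/kubernetes/manifests' errors that repeat throughout this log come from kubelet's file (static pod) config source: the kubelet is configured with a static-pod manifest directory that does not exist on this node, so each sync it logs the error and ignores that source. It is harmless, and, assuming static pods really are unused here, simply creating the empty directory silences it; a minimal sketch:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Path taken from the recurring kubelet error above; creating it (even empty)
	// gives the file config source something to read and stops the repeated error.
	const manifestDir = "/etc/kubernetes/manifests"

	if err := os.MkdirAll(manifestDir, 0o755); err != nil {
		log.Fatalf("create %s: %v", manifestDir, err)
	}
	log.Printf("%s exists; kubelet's static-pod source should stop logging the error", manifestDir)
}
```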
Jan 13 20:25:18.955470 containerd[1480]: time="2025-01-13T20:25:18.955384358Z" level=info msg="RemovePodSandbox \"db3a08536532760e90abe435e208f03c0b8197780d9ec691debc8e230c7c061f\" returns successfully" Jan 13 20:25:18.959824 kubelet[1984]: E0113 20:25:18.959791 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:19.960918 kubelet[1984]: E0113 20:25:19.960825 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:20.961107 kubelet[1984]: E0113 20:25:20.960999 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:21.961751 kubelet[1984]: E0113 20:25:21.961668 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:22.962798 kubelet[1984]: E0113 20:25:22.962720 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:23.964022 kubelet[1984]: E0113 20:25:23.963951 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:24.964185 kubelet[1984]: E0113 20:25:24.964097 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:25.964738 kubelet[1984]: E0113 20:25:25.964659 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:26.965741 kubelet[1984]: E0113 20:25:26.965684 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:27.966832 kubelet[1984]: E0113 20:25:27.966752 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:28.967564 kubelet[1984]: E0113 20:25:28.967490 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:29.968704 kubelet[1984]: E0113 20:25:29.968650 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:30.518280 kubelet[1984]: E0113 20:25:30.518103 1984 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:25:20Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:25:20Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:25:20Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:25:20Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\\\",\\\"ghcr.io/flatcar/calico/node:v3.29.1\\\"],\\\"sizeBytes\\\":137671624},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\\\",\\\"ghcr.io/flatcar/calico/cni:v3.29.1\\\"],\\\"sizeBytes\\\":91072777},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":67696923},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\\\",\\\"registry.k8s.io/kube-proxy:v1.31.4\\\"],\\\"sizeBytes\\\":26770445},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\\\"],\\\"sizeBytes\\\":11252974},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\\\",\\\"ghcr.io/flatcar/calico/csi:v3.29.1\\\"],\\\"sizeBytes\\\":8834384},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\\\"],\\\"sizeBytes\\\":6487425},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"10.0.0.4\": Patch \"https://138.199.153.205:6443/api/v1/nodes/10.0.0.4/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 20:25:30.682462 kubelet[1984]: E0113 20:25:30.682261 1984 controller.go:195] "Failed to update lease" err="Put \"https://138.199.153.205:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 13 20:25:30.969679 kubelet[1984]: E0113 20:25:30.969588 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:31.970190 kubelet[1984]: E0113 20:25:31.970049 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:32.971359 kubelet[1984]: E0113 20:25:32.971279 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:33.972213 kubelet[1984]: E0113 
20:25:33.972137 1984 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
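The node-status patch and lease-update failures above (requests to https://138.199.153.205:6443 with ?timeout=10s ending in "(Client.Timeout exceeded while awaiting headers)") are the Go HTTP client giving up after its configured deadline because the API server did not send response headers in time. A self-contained sketch that produces the same kind of failure against a deliberately slow server; the exact error prefix varies by Go version, but the "(Client.Timeout exceeded while awaiting headers)" suffix is the same:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A server that takes longer to send headers than the client is willing to wait.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(200 * time.Millisecond)
	}))
	defer slow.Close()

	// The kubelet requests above carry a 10s timeout; here the client-side
	// deadline is shrunk so the example finishes quickly.
	client := &http.Client{Timeout: 50 * time.Millisecond}

	_, err := client.Get(slow.URL)
	fmt.Println(err) // error ends with "(Client.Timeout exceeded while awaiting headers)"
}
```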