Dec 13 01:48:20.920649 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:48:20.920671 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:48:20.920689 kernel: KASLR enabled Dec 13 01:48:20.920695 kernel: efi: EFI v2.7 by EDK II Dec 13 01:48:20.920700 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Dec 13 01:48:20.920706 kernel: random: crng init done Dec 13 01:48:20.920713 kernel: ACPI: Early table checksum verification disabled Dec 13 01:48:20.920719 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Dec 13 01:48:20.920725 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:48:20.920733 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920739 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920745 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920751 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920757 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920765 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920773 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920780 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920786 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:48:20.920792 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 13 01:48:20.920799 kernel: NUMA: Failed to initialise from firmware Dec 13 01:48:20.920806 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:48:20.920812 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Dec 13 01:48:20.920818 kernel: Zone ranges: Dec 13 01:48:20.920824 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:48:20.920830 kernel: DMA32 empty Dec 13 01:48:20.920838 kernel: Normal empty Dec 13 01:48:20.920844 kernel: Movable zone start for each node Dec 13 01:48:20.920850 kernel: Early memory node ranges Dec 13 01:48:20.920857 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Dec 13 01:48:20.920863 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Dec 13 01:48:20.920870 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Dec 13 01:48:20.920876 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 13 01:48:20.920882 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 13 01:48:20.920889 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 13 01:48:20.920895 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 13 01:48:20.920902 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:48:20.920908 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 13 01:48:20.920916 kernel: psci: probing for conduit method from ACPI. Dec 13 01:48:20.920922 kernel: psci: PSCIv1.1 detected in firmware. 
Dec 13 01:48:20.920928 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:48:20.920937 kernel: psci: Trusted OS migration not required Dec 13 01:48:20.920944 kernel: psci: SMC Calling Convention v1.1 Dec 13 01:48:20.920951 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 13 01:48:20.920959 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:48:20.920965 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:48:20.920973 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 13 01:48:20.920979 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:48:20.920986 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:48:20.920993 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:48:20.920999 kernel: CPU features: detected: Spectre-v4 Dec 13 01:48:20.921006 kernel: CPU features: detected: Spectre-BHB Dec 13 01:48:20.921012 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:48:20.921020 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:48:20.921032 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:48:20.921039 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:48:20.921046 kernel: alternatives: applying boot alternatives Dec 13 01:48:20.921054 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:48:20.921061 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:48:20.921068 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:48:20.921075 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:48:20.921082 kernel: Fallback order for Node 0: 0 Dec 13 01:48:20.921089 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Dec 13 01:48:20.921095 kernel: Policy zone: DMA Dec 13 01:48:20.921102 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:48:20.921110 kernel: software IO TLB: area num 4. Dec 13 01:48:20.921117 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Dec 13 01:48:20.921124 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Dec 13 01:48:20.921131 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:48:20.921138 kernel: trace event string verifier disabled Dec 13 01:48:20.921145 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:48:20.921152 kernel: rcu: RCU event tracing is enabled. Dec 13 01:48:20.921159 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:48:20.921166 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:48:20.921173 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:48:20.921179 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 01:48:20.921186 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:48:20.921194 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:48:20.921201 kernel: GICv3: 256 SPIs implemented Dec 13 01:48:20.921208 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:48:20.921214 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:48:20.921221 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:48:20.921228 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 01:48:20.921235 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 01:48:20.921242 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 01:48:20.921249 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Dec 13 01:48:20.921255 kernel: GICv3: using LPI property table @0x00000000400f0000 Dec 13 01:48:20.921262 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Dec 13 01:48:20.921270 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:48:20.921277 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:48:20.921283 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:48:20.921290 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:48:20.921297 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:48:20.921304 kernel: arm-pv: using stolen time PV Dec 13 01:48:20.921311 kernel: Console: colour dummy device 80x25 Dec 13 01:48:20.921318 kernel: ACPI: Core revision 20230628 Dec 13 01:48:20.921325 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:48:20.921332 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:48:20.921340 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:48:20.921347 kernel: landlock: Up and running. Dec 13 01:48:20.921354 kernel: SELinux: Initializing. Dec 13 01:48:20.921361 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:48:20.921368 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:48:20.921375 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:48:20.921382 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:48:20.921388 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:48:20.921395 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:48:20.921403 kernel: Platform MSI: ITS@0x8080000 domain created Dec 13 01:48:20.921410 kernel: PCI/MSI: ITS@0x8080000 domain created Dec 13 01:48:20.921417 kernel: Remapping and enabling EFI services. Dec 13 01:48:20.921424 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 01:48:20.921430 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:48:20.921437 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 01:48:20.921444 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Dec 13 01:48:20.921451 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:48:20.921458 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:48:20.921465 kernel: Detected PIPT I-cache on CPU2 Dec 13 01:48:20.921473 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 13 01:48:20.921481 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Dec 13 01:48:20.921492 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:48:20.921501 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 13 01:48:20.921508 kernel: Detected PIPT I-cache on CPU3 Dec 13 01:48:20.921515 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 13 01:48:20.921535 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Dec 13 01:48:20.921543 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:48:20.921550 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 13 01:48:20.921560 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:48:20.921567 kernel: SMP: Total of 4 processors activated. Dec 13 01:48:20.921574 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:48:20.921582 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:48:20.921589 kernel: CPU features: detected: Common not Private translations Dec 13 01:48:20.921666 kernel: CPU features: detected: CRC32 instructions Dec 13 01:48:20.921678 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 01:48:20.921692 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:48:20.921702 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:48:20.921710 kernel: CPU features: detected: Privileged Access Never Dec 13 01:48:20.921717 kernel: CPU features: detected: RAS Extension Support Dec 13 01:48:20.921724 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 01:48:20.921731 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:48:20.921739 kernel: alternatives: applying system-wide alternatives Dec 13 01:48:20.921746 kernel: devtmpfs: initialized Dec 13 01:48:20.921753 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:48:20.921760 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:48:20.921769 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:48:20.921776 kernel: SMBIOS 3.0.0 present. 
Dec 13 01:48:20.921783 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Dec 13 01:48:20.921790 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:48:20.921798 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:48:20.921805 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:48:20.921812 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:48:20.921819 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:48:20.921827 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Dec 13 01:48:20.921835 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:48:20.921842 kernel: cpuidle: using governor menu Dec 13 01:48:20.921850 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:48:20.921857 kernel: ASID allocator initialised with 32768 entries Dec 13 01:48:20.921865 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:48:20.921872 kernel: Serial: AMBA PL011 UART driver Dec 13 01:48:20.921879 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:48:20.921886 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:48:20.921893 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:48:20.921902 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:48:20.921909 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:48:20.921917 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:48:20.921924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:48:20.921931 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:48:20.921938 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:48:20.921946 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:48:20.921953 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:48:20.921960 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:48:20.921969 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:48:20.921976 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:48:20.921983 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:48:20.921990 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:48:20.921998 kernel: ACPI: Interpreter enabled Dec 13 01:48:20.922005 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:48:20.922012 kernel: ACPI: MCFG table detected, 1 entries Dec 13 01:48:20.922020 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:48:20.922027 kernel: printk: console [ttyAMA0] enabled Dec 13 01:48:20.922035 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:48:20.922165 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:48:20.922239 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 01:48:20.922308 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 01:48:20.922371 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 01:48:20.922435 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 01:48:20.922445 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 13 
01:48:20.922455 kernel: PCI host bridge to bus 0000:00 Dec 13 01:48:20.922525 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 01:48:20.922584 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 01:48:20.922663 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 01:48:20.922737 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:48:20.922818 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Dec 13 01:48:20.922894 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:48:20.922966 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Dec 13 01:48:20.923034 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Dec 13 01:48:20.923102 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 01:48:20.923172 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 01:48:20.923239 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Dec 13 01:48:20.923305 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Dec 13 01:48:20.923365 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 01:48:20.923425 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 01:48:20.923483 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 01:48:20.923493 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 01:48:20.923500 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 01:48:20.923507 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 01:48:20.923514 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 01:48:20.923522 kernel: iommu: Default domain type: Translated Dec 13 01:48:20.923529 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:48:20.923538 kernel: efivars: Registered efivars operations Dec 13 01:48:20.923545 kernel: vgaarb: loaded Dec 13 01:48:20.923552 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:48:20.923560 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:48:20.923567 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:48:20.923574 kernel: pnp: PnP ACPI init Dec 13 01:48:20.923659 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 01:48:20.923670 kernel: pnp: PnP ACPI: found 1 devices Dec 13 01:48:20.923684 kernel: NET: Registered PF_INET protocol family Dec 13 01:48:20.923693 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:48:20.923700 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:48:20.923708 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:48:20.923716 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:48:20.923723 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:48:20.923730 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:48:20.923738 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:48:20.923745 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:48:20.923754 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:48:20.923762 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:48:20.923769 kernel: kvm [1]: HYP mode 
not available Dec 13 01:48:20.923776 kernel: Initialise system trusted keyrings Dec 13 01:48:20.923784 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:48:20.923791 kernel: Key type asymmetric registered Dec 13 01:48:20.923798 kernel: Asymmetric key parser 'x509' registered Dec 13 01:48:20.923806 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:48:20.923813 kernel: io scheduler mq-deadline registered Dec 13 01:48:20.923821 kernel: io scheduler kyber registered Dec 13 01:48:20.923828 kernel: io scheduler bfq registered Dec 13 01:48:20.923836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 01:48:20.923843 kernel: ACPI: button: Power Button [PWRB] Dec 13 01:48:20.923851 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 01:48:20.923920 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 13 01:48:20.923930 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:48:20.923937 kernel: thunder_xcv, ver 1.0 Dec 13 01:48:20.923944 kernel: thunder_bgx, ver 1.0 Dec 13 01:48:20.923954 kernel: nicpf, ver 1.0 Dec 13 01:48:20.923961 kernel: nicvf, ver 1.0 Dec 13 01:48:20.924035 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:48:20.924099 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:48:20 UTC (1734054500) Dec 13 01:48:20.924109 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:48:20.924116 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 01:48:20.924124 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:48:20.924131 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:48:20.924140 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:48:20.924147 kernel: Segment Routing with IPv6 Dec 13 01:48:20.924154 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:48:20.924161 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:48:20.924168 kernel: Key type dns_resolver registered Dec 13 01:48:20.924175 kernel: registered taskstats version 1 Dec 13 01:48:20.924183 kernel: Loading compiled-in X.509 certificates Dec 13 01:48:20.924190 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:48:20.924197 kernel: Key type .fscrypt registered Dec 13 01:48:20.924205 kernel: Key type fscrypt-provisioning registered Dec 13 01:48:20.924213 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:48:20.924220 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:48:20.924227 kernel: ima: No architecture policies found Dec 13 01:48:20.924235 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:48:20.924242 kernel: clk: Disabling unused clocks Dec 13 01:48:20.924249 kernel: Freeing unused kernel memory: 39360K Dec 13 01:48:20.924256 kernel: Run /init as init process Dec 13 01:48:20.924263 kernel: with arguments: Dec 13 01:48:20.924271 kernel: /init Dec 13 01:48:20.924278 kernel: with environment: Dec 13 01:48:20.924285 kernel: HOME=/ Dec 13 01:48:20.924293 kernel: TERM=linux Dec 13 01:48:20.924299 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:48:20.924309 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:48:20.924318 systemd[1]: Detected virtualization kvm. Dec 13 01:48:20.924326 systemd[1]: Detected architecture arm64. Dec 13 01:48:20.924335 systemd[1]: Running in initrd. Dec 13 01:48:20.924342 systemd[1]: No hostname configured, using default hostname. Dec 13 01:48:20.924350 systemd[1]: Hostname set to . Dec 13 01:48:20.924358 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:48:20.924365 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:48:20.924373 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:48:20.924381 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:48:20.924389 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:48:20.924399 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:48:20.924407 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:48:20.924415 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:48:20.924424 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:48:20.924432 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:48:20.924440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:48:20.924448 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:48:20.924457 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:48:20.924465 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:48:20.924472 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:48:20.924480 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:48:20.924488 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:48:20.924495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:48:20.924503 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:48:20.924511 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:48:20.924520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:48:20.924528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:48:20.924536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:48:20.924544 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:48:20.924551 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:48:20.924559 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:48:20.924567 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:48:20.924574 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:48:20.924582 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:48:20.924591 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:48:20.924624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:48:20.924632 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:48:20.924640 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:48:20.924648 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:48:20.924657 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:48:20.924667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:48:20.924675 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:48:20.924688 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:48:20.924697 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:48:20.924705 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:48:20.924729 systemd-journald[237]: Collecting audit messages is disabled. Dec 13 01:48:20.924749 kernel: Bridge firewalling registered Dec 13 01:48:20.924757 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:48:20.924765 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:48:20.924774 systemd-journald[237]: Journal started Dec 13 01:48:20.924794 systemd-journald[237]: Runtime Journal (/run/log/journal/9cb72119960942f397351f93b9076d85) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:48:20.903189 systemd-modules-load[238]: Inserted module 'overlay' Dec 13 01:48:20.920559 systemd-modules-load[238]: Inserted module 'br_netfilter' Dec 13 01:48:20.929432 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:48:20.929463 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:48:20.933655 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:48:20.936389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:48:20.939749 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:48:20.941627 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:48:20.945791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:48:20.949154 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:48:20.956631 dracut-cmdline[277]: dracut-dracut-053 Dec 13 01:48:20.958921 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:48:20.977064 systemd-resolved[282]: Positive Trust Anchors: Dec 13 01:48:20.977084 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:48:20.977119 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:48:20.981840 systemd-resolved[282]: Defaulting to hostname 'linux'. Dec 13 01:48:20.984013 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:48:20.984991 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:48:21.028631 kernel: SCSI subsystem initialized Dec 13 01:48:21.032619 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:48:21.040632 kernel: iscsi: registered transport (tcp) Dec 13 01:48:21.060644 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:48:21.060696 kernel: QLogic iSCSI HBA Driver Dec 13 01:48:21.125790 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:48:21.138789 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:48:21.163786 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:48:21.163838 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:48:21.164619 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:48:21.230629 kernel: raid6: neonx8 gen() 15739 MB/s Dec 13 01:48:21.248077 kernel: raid6: neonx4 gen() 11799 MB/s Dec 13 01:48:21.264626 kernel: raid6: neonx2 gen() 9467 MB/s Dec 13 01:48:21.281626 kernel: raid6: neonx1 gen() 10404 MB/s Dec 13 01:48:21.298617 kernel: raid6: int64x8 gen() 6933 MB/s Dec 13 01:48:21.315617 kernel: raid6: int64x4 gen() 7344 MB/s Dec 13 01:48:21.332616 kernel: raid6: int64x2 gen() 6137 MB/s Dec 13 01:48:21.349616 kernel: raid6: int64x1 gen() 5063 MB/s Dec 13 01:48:21.349631 kernel: raid6: using algorithm neonx8 gen() 15739 MB/s Dec 13 01:48:21.366625 kernel: raid6: .... xor() 11685 MB/s, rmw enabled Dec 13 01:48:21.366643 kernel: raid6: using neon recovery algorithm Dec 13 01:48:21.372617 kernel: xor: measuring software checksum speed Dec 13 01:48:21.372644 kernel: 8regs : 15843 MB/sec Dec 13 01:48:21.374044 kernel: 32regs : 17815 MB/sec Dec 13 01:48:21.374058 kernel: arm64_neon : 22834 MB/sec Dec 13 01:48:21.374068 kernel: xor: using function: arm64_neon (22834 MB/sec) Dec 13 01:48:21.433634 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:48:21.444951 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 01:48:21.454853 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:48:21.471805 systemd-udevd[462]: Using default interface naming scheme 'v255'. Dec 13 01:48:21.475930 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:48:21.482776 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:48:21.496759 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Dec 13 01:48:21.529231 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:48:21.546851 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:48:21.586505 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:48:21.593785 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:48:21.608776 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:48:21.610268 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:48:21.614753 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:48:21.616252 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:48:21.622957 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:48:21.632329 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:48:21.645190 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 13 01:48:21.649490 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:48:21.649591 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:48:21.649624 kernel: GPT:9289727 != 19775487 Dec 13 01:48:21.649634 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:48:21.649644 kernel: GPT:9289727 != 19775487 Dec 13 01:48:21.649652 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:48:21.649661 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:48:21.648098 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:48:21.648206 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:48:21.650915 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:48:21.653716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:48:21.653835 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:48:21.655317 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:48:21.661866 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:48:21.671626 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (518) Dec 13 01:48:21.672645 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (525) Dec 13 01:48:21.677060 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:48:21.684400 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:48:21.688688 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Dec 13 01:48:21.692225 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:48:21.693157 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:48:21.698137 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:48:21.707739 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:48:21.709725 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:48:21.715526 disk-uuid[553]: Primary Header is updated. Dec 13 01:48:21.715526 disk-uuid[553]: Secondary Entries is updated. Dec 13 01:48:21.715526 disk-uuid[553]: Secondary Header is updated. Dec 13 01:48:21.718639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:48:21.729889 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:48:22.735518 disk-uuid[554]: The operation has completed successfully. Dec 13 01:48:22.736504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:48:22.765893 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:48:22.765994 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:48:22.784846 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:48:22.787928 sh[576]: Success Dec 13 01:48:22.806634 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:48:22.856420 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:48:22.872974 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:48:22.874700 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:48:22.885409 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:48:22.885458 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:48:22.885469 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:48:22.885480 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:48:22.885982 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:48:22.891388 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:48:22.892610 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:48:22.904817 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:48:22.906232 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:48:22.913665 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:48:22.913710 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:48:22.914891 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:48:22.916619 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:48:22.926657 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:48:22.928622 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:48:22.940396 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 13 01:48:22.955022 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:48:23.016678 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:48:23.025813 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:48:23.053484 systemd-networkd[764]: lo: Link UP Dec 13 01:48:23.053495 systemd-networkd[764]: lo: Gained carrier Dec 13 01:48:23.055029 systemd-networkd[764]: Enumeration completed Dec 13 01:48:23.055127 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:48:23.055521 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:48:23.055524 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:48:23.056434 systemd-networkd[764]: eth0: Link UP Dec 13 01:48:23.063021 ignition[674]: Ignition 2.19.0 Dec 13 01:48:23.056437 systemd-networkd[764]: eth0: Gained carrier Dec 13 01:48:23.063028 ignition[674]: Stage: fetch-offline Dec 13 01:48:23.056445 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:48:23.063067 ignition[674]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:48:23.056468 systemd[1]: Reached target network.target - Network. Dec 13 01:48:23.063075 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:48:23.072665 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:48:23.063235 ignition[674]: parsed url from cmdline: "" Dec 13 01:48:23.063238 ignition[674]: no config URL provided Dec 13 01:48:23.063243 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:48:23.063249 ignition[674]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:48:23.063272 ignition[674]: op(1): [started] loading QEMU firmware config module Dec 13 01:48:23.063276 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:48:23.072204 ignition[674]: op(1): [finished] loading QEMU firmware config module Dec 13 01:48:23.082739 ignition[674]: parsing config with SHA512: 3b3ed75d4957646bb99d944d74f0725576f8deb48c1c0b0ca00c3aaa4fcd1303f824f42420be5fbf8db947488eb5937462d976ae0c0f2749f886eb738dd9fd60 Dec 13 01:48:23.085741 unknown[674]: fetched base config from "system" Dec 13 01:48:23.085751 unknown[674]: fetched user config from "qemu" Dec 13 01:48:23.086011 ignition[674]: fetch-offline: fetch-offline passed Dec 13 01:48:23.086071 ignition[674]: Ignition finished successfully Dec 13 01:48:23.088090 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:48:23.089378 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:48:23.095770 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Dec 13 01:48:23.107397 ignition[778]: Ignition 2.19.0 Dec 13 01:48:23.107408 ignition[778]: Stage: kargs Dec 13 01:48:23.108452 ignition[778]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:48:23.108471 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:48:23.109295 ignition[778]: kargs: kargs passed Dec 13 01:48:23.109349 ignition[778]: Ignition finished successfully Dec 13 01:48:23.111488 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:48:23.121529 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:48:23.134398 ignition[788]: Ignition 2.19.0 Dec 13 01:48:23.134408 ignition[788]: Stage: disks Dec 13 01:48:23.134574 ignition[788]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:48:23.134584 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:48:23.135273 ignition[788]: disks: disks passed Dec 13 01:48:23.136988 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:48:23.135315 ignition[788]: Ignition finished successfully Dec 13 01:48:23.138205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:48:23.139389 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:48:23.140632 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:48:23.141959 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:48:23.143359 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:48:23.153766 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:48:23.165695 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:48:23.170132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:48:23.181721 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:48:23.247559 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:48:23.248721 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:48:23.248655 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:48:23.259691 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:48:23.261639 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:48:23.262440 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:48:23.262479 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:48:23.262502 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:48:23.268146 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:48:23.270984 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 01:48:23.274657 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807) Dec 13 01:48:23.274718 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:48:23.276276 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:48:23.276303 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:48:23.281623 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:48:23.280323 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:48:23.318488 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:48:23.323545 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:48:23.329129 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:48:23.334357 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:48:23.416938 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:48:23.431792 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:48:23.435878 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:48:23.438740 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:48:23.456523 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:48:23.458041 ignition[922]: INFO : Ignition 2.19.0 Dec 13 01:48:23.458041 ignition[922]: INFO : Stage: mount Dec 13 01:48:23.458041 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:48:23.458041 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:48:23.458041 ignition[922]: INFO : mount: mount passed Dec 13 01:48:23.461279 ignition[922]: INFO : Ignition finished successfully Dec 13 01:48:23.461237 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:48:23.467726 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:48:23.883951 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:48:23.893850 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:48:23.908089 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (934) Dec 13 01:48:23.908134 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:48:23.908145 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:48:23.908744 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:48:23.913621 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:48:23.914229 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:48:23.934508 ignition[951]: INFO : Ignition 2.19.0 Dec 13 01:48:23.934508 ignition[951]: INFO : Stage: files Dec 13 01:48:23.935752 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:48:23.935752 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:48:23.935752 ignition[951]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:48:23.938341 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:48:23.938341 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:48:23.938341 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:48:23.938341 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:48:23.942438 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:48:23.942438 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:48:23.942438 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:48:23.942438 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:48:23.942438 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:48:23.942438 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:48:23.942438 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:48:23.942438 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:48:23.942438 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Dec 13 01:48:23.938707 unknown[951]: wrote ssh authorized keys file for user: core Dec 13 01:48:24.281418 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 01:48:24.569792 systemd-networkd[764]: eth0: Gained IPv6LL Dec 13 01:48:24.693359 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 01:48:24.693359 ignition[951]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 13 01:48:24.696077 ignition[951]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:48:24.696077 ignition[951]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:48:24.696077 ignition[951]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 13 01:48:24.696077 ignition[951]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Dec 13 01:48:24.718216 ignition[951]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:48:24.722024 ignition[951]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:48:24.723095 ignition[951]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:48:24.723095 ignition[951]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:48:24.723095 ignition[951]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:48:24.723095 ignition[951]: INFO : files: files passed Dec 13 01:48:24.723095 ignition[951]: INFO : Ignition finished successfully Dec 13 01:48:24.724869 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:48:24.735852 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:48:24.739452 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:48:24.742132 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:48:24.742239 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:48:24.745828 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 01:48:24.749228 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:48:24.750445 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:48:24.751580 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:48:24.751310 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:48:24.752883 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:48:24.763771 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:48:24.783575 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:48:24.783713 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:48:24.785375 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:48:24.786793 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:48:24.788138 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:48:24.788936 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:48:24.806207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:48:24.816816 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:48:24.824974 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:48:24.825943 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:48:24.827421 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:48:24.828702 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:48:24.828819 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 13 01:48:24.830646 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:48:24.832120 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:48:24.833276 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:48:24.834490 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:48:24.835902 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:48:24.837317 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:48:24.838632 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:48:24.840060 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:48:24.841427 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:48:24.842837 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:48:24.843934 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:48:24.844058 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:48:24.845744 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:48:24.847169 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:48:24.848528 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:48:24.851692 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:48:24.852628 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:48:24.852759 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:48:24.854878 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:48:24.854992 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:48:24.856399 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:48:24.857537 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:48:24.860701 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:48:24.861710 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:48:24.863320 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:48:24.864502 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:48:24.864582 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:48:24.865733 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:48:24.865831 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:48:24.866920 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:48:24.867026 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:48:24.868329 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:48:24.868424 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:48:24.880773 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:48:24.881441 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:48:24.881563 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:48:24.884206 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:48:24.885132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 01:48:24.885258 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:48:24.886590 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:48:24.886865 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:48:24.890644 ignition[1005]: INFO : Ignition 2.19.0 Dec 13 01:48:24.890644 ignition[1005]: INFO : Stage: umount Dec 13 01:48:24.892208 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:48:24.892208 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:48:24.892208 ignition[1005]: INFO : umount: umount passed Dec 13 01:48:24.892208 ignition[1005]: INFO : Ignition finished successfully Dec 13 01:48:24.894874 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:48:24.894958 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:48:24.897753 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:48:24.899896 systemd[1]: Stopped target network.target - Network. Dec 13 01:48:24.901013 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:48:24.901079 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:48:24.902368 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:48:24.902413 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:48:24.903534 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:48:24.903646 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:48:24.909407 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:48:24.909468 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:48:24.910836 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:48:24.912430 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:48:24.914163 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:48:24.914257 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:48:24.921653 systemd-networkd[764]: eth0: DHCPv6 lease lost Dec 13 01:48:24.923854 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:48:24.924697 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:48:24.926635 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:48:24.926801 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:48:24.929152 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:48:24.929232 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:48:24.940985 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:48:24.941648 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:48:24.941711 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:48:24.943163 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:48:24.943199 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:48:24.944418 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:48:24.944453 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:48:24.946041 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
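The umount-stage messages above note that Ignition found no base configs at /usr/lib/ignition/base.d and no platform config dir at /usr/lib/ignition/base.platform.d/qemu. A small sketch that lists what is (or is not) present in those two directories, with the paths taken verbatim from the log:

    #!/usr/bin/env python3
    """List the Ignition base-config directories mentioned in the log."""
    from pathlib import Path

    for d in (Path("/usr/lib/ignition/base.d"),
              Path("/usr/lib/ignition/base.platform.d/qemu")):
        if d.is_dir():
            names = sorted(p.name for p in d.iterdir())
            print(d, "->", names if names else "(empty)")
        else:
            print(d, "-> not present")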
Dec 13 01:48:24.946080 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:48:24.947511 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:48:24.957084 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:48:24.957190 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:48:24.958898 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:48:24.958997 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:48:24.960453 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:48:24.960509 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:48:24.967452 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:48:24.967585 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:48:24.970114 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:48:24.970165 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:48:24.974041 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:48:24.974070 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:48:24.975305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:48:24.975344 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:48:24.977761 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:48:24.977797 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:48:24.979186 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:48:24.979228 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:48:24.988767 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:48:24.989592 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:48:24.989685 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:48:24.991271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:48:24.991313 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:48:24.995983 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:48:24.996068 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:48:24.999242 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:48:25.001785 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:48:25.010770 systemd[1]: Switching root. Dec 13 01:48:25.036768 systemd-journald[237]: Journal stopped Dec 13 01:48:25.740836 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
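After the journal is stopped with SIGTERM, PID 1 switches into the real root; the initrd entries remain part of the same boot's journal. A sketch that finds the "Switching root." entry for the current boot, assuming only the journalctl CLI and its JSON output mode (one object per line with a MESSAGE field):

    #!/usr/bin/env python3
    """Locate the 'Switching root.' entry in the current boot's journal."""
    import json
    import subprocess

    out = subprocess.run(["journalctl", "-b", "-o", "json"],
                         check=True, capture_output=True, text=True).stdout
    for line in out.splitlines():
        entry = json.loads(line)
        msg = entry.get("MESSAGE", "")
        if isinstance(msg, str) and "Switching root" in msg:
            # __REALTIME_TIMESTAMP is microseconds since the epoch, as a string.
            print(entry.get("__REALTIME_TIMESTAMP", "?"), msg)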
Dec 13 01:48:25.740892 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:48:25.740905 kernel: SELinux: policy capability open_perms=1 Dec 13 01:48:25.740915 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:48:25.740925 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:48:25.740938 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:48:25.740948 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:48:25.740957 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:48:25.740967 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:48:25.740977 kernel: audit: type=1403 audit(1734054505.169:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:48:25.740988 systemd[1]: Successfully loaded SELinux policy in 34.477ms. Dec 13 01:48:25.741008 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.873ms. Dec 13 01:48:25.741022 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:48:25.741034 systemd[1]: Detected virtualization kvm. Dec 13 01:48:25.741046 systemd[1]: Detected architecture arm64. Dec 13 01:48:25.741057 systemd[1]: Detected first boot. Dec 13 01:48:25.741068 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:48:25.741078 zram_generator::config[1050]: No configuration found. Dec 13 01:48:25.741094 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:48:25.741104 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:48:25.741115 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:48:25.741127 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:48:25.741139 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:48:25.741150 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:48:25.741160 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:48:25.741171 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:48:25.741182 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:48:25.741193 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:48:25.741205 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:48:25.741216 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:48:25.741227 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:48:25.741238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:48:25.741249 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:48:25.741260 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:48:25.741271 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
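The messages above show the SELinux policy being loaded in roughly 34 ms before systemd relabels /dev, /run and /sys/fs/cgroup. Whether the policy ended up enforcing can be read back from the selinuxfs interface; a minimal check, assuming only that selinuxfs is mounted at the usual /sys/fs/selinux:

    #!/usr/bin/env python3
    """Report the SELinux mode via the /sys/fs/selinux/enforce interface."""
    from pathlib import Path

    ENFORCE = Path("/sys/fs/selinux/enforce")

    if not ENFORCE.exists():
        print("selinuxfs not mounted: SELinux disabled or not built in")
    else:
        # The file contains "1" for enforcing, "0" for permissive.
        mode = ENFORCE.read_text().strip()
        print("SELinux is", "enforcing" if mode == "1" else "permissive")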
Dec 13 01:48:25.741282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:48:25.741292 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:48:25.741304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:48:25.741315 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:48:25.741325 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:48:25.741336 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:48:25.741347 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:48:25.741359 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:48:25.741370 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:48:25.741383 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:48:25.741393 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:48:25.741404 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:48:25.741415 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:48:25.741429 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:48:25.741440 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:48:25.741451 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:48:25.741461 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:48:25.741472 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:48:25.741482 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:48:25.741495 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:48:25.741506 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:48:25.741517 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:48:25.741528 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:48:25.741539 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:48:25.741550 systemd[1]: Reached target machines.target - Containers. Dec 13 01:48:25.741561 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:48:25.741571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:48:25.741584 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:48:25.741595 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:48:25.741614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:48:25.741626 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:48:25.741637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:48:25.741647 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:48:25.741658 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 01:48:25.741674 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:48:25.741687 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:48:25.741698 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:48:25.741708 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:48:25.741719 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:48:25.741729 kernel: loop: module loaded Dec 13 01:48:25.741740 kernel: fuse: init (API version 7.39) Dec 13 01:48:25.741750 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:48:25.741761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:48:25.741771 kernel: ACPI: bus type drm_connector registered Dec 13 01:48:25.741781 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:48:25.741794 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:48:25.741804 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:48:25.741815 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:48:25.741844 systemd-journald[1114]: Collecting audit messages is disabled. Dec 13 01:48:25.741865 systemd[1]: Stopped verity-setup.service. Dec 13 01:48:25.741876 systemd-journald[1114]: Journal started Dec 13 01:48:25.741898 systemd-journald[1114]: Runtime Journal (/run/log/journal/9cb72119960942f397351f93b9076d85) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:48:25.549834 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:48:25.575728 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:48:25.576099 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:48:25.746182 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:48:25.746805 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:48:25.747699 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:48:25.749037 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:48:25.749946 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:48:25.750960 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:48:25.751966 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:48:25.753684 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:48:25.754882 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:48:25.756046 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:48:25.756189 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:48:25.757455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:48:25.757637 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:48:25.758721 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:48:25.758873 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:48:25.759991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:48:25.760135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
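systemd-journald reports its runtime journal at 5.9M (47.3M max) above; the equivalent figure can be pulled from a running system with journalctl's built-in accounting. A one-call sketch, assuming only that the journalctl CLI is on PATH:

    #!/usr/bin/env python3
    """Print journald's own disk-usage summary (runtime + persistent journals)."""
    import subprocess

    result = subprocess.run(["journalctl", "--disk-usage"],
                            capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())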
Dec 13 01:48:25.761490 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:48:25.761693 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:48:25.762931 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:48:25.763072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:48:25.764250 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:48:25.765509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:48:25.766828 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:48:25.778954 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:48:25.785722 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:48:25.787704 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:48:25.788558 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:48:25.788590 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:48:25.790353 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:48:25.792355 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:48:25.796813 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:48:25.797806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:48:25.799419 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:48:25.803742 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:48:25.806139 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:48:25.807136 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:48:25.808045 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:48:25.809844 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:48:25.814962 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:48:25.816847 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:48:25.818566 systemd-journald[1114]: Time spent on flushing to /var/log/journal/9cb72119960942f397351f93b9076d85 is 24.791ms for 839 entries. Dec 13 01:48:25.818566 systemd-journald[1114]: System Journal (/var/log/journal/9cb72119960942f397351f93b9076d85) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:48:25.857203 systemd-journald[1114]: Received client request to flush runtime journal. Dec 13 01:48:25.857252 kernel: loop0: detected capacity change from 0 to 189592 Dec 13 01:48:25.821672 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:48:25.823241 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:48:25.824424 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
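The flush message above reports 24.791 ms spent writing 839 entries to the persistent journal. Back-of-the-envelope arithmetic on those two figures:

    # Figures taken from the systemd-journald flush message above.
    flush_ms = 24.791
    entries = 839
    print(f"~{flush_ms / entries * 1000:.1f} us per entry")   # ~29.5 us/entry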
Dec 13 01:48:25.827042 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:48:25.828346 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:48:25.833016 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:48:25.842854 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:48:25.850626 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:48:25.862240 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:48:25.864416 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:48:25.867533 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:48:25.876846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:48:25.881439 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:48:25.883234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:48:25.884570 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:48:25.892693 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:48:25.894632 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 01:48:25.898254 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Dec 13 01:48:25.898278 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Dec 13 01:48:25.904809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:48:25.929775 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 01:48:25.957624 kernel: loop3: detected capacity change from 0 to 189592 Dec 13 01:48:25.964616 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 01:48:25.969615 kernel: loop5: detected capacity change from 0 to 114432 Dec 13 01:48:25.972997 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:48:25.973381 (sd-merge)[1188]: Merged extensions into '/usr'. Dec 13 01:48:25.977350 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:48:25.977362 systemd[1]: Reloading... Dec 13 01:48:26.029637 zram_generator::config[1215]: No configuration found. Dec 13 01:48:26.095492 ldconfig[1156]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:48:26.126465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:48:26.162394 systemd[1]: Reloading finished in 184 ms. Dec 13 01:48:26.199288 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:48:26.200589 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:48:26.217797 systemd[1]: Starting ensure-sysext.service... Dec 13 01:48:26.219507 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:48:26.231112 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:48:26.231132 systemd[1]: Reloading... 
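sd-merge reports the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extensions being merged into /usr before systemd reloads. The resulting merge state can be queried back with the systemd-sysext CLI (shipped with systemd 255, per the feature string earlier); a sketch that simply shells out to it:

    #!/usr/bin/env python3
    """Show which system extension images are currently merged."""
    import subprocess

    try:
        result = subprocess.run(["systemd-sysext", "status"],
                                capture_output=True, text=True)
        print(result.stdout or result.stderr)
    except FileNotFoundError:
        print("systemd-sysext not available on this system")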
Dec 13 01:48:26.252712 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:48:26.252966 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:48:26.253586 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:48:26.253828 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Dec 13 01:48:26.253875 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Dec 13 01:48:26.256676 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:48:26.256685 systemd-tmpfiles[1249]: Skipping /boot Dec 13 01:48:26.266595 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:48:26.266626 systemd-tmpfiles[1249]: Skipping /boot Dec 13 01:48:26.282637 zram_generator::config[1272]: No configuration found. Dec 13 01:48:26.363830 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:48:26.399504 systemd[1]: Reloading finished in 168 ms. Dec 13 01:48:26.413979 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:48:26.427082 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:48:26.433887 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:48:26.436068 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:48:26.438161 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:48:26.443941 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:48:26.451092 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:48:26.454911 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:48:26.458045 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:48:26.460245 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:48:26.461669 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:48:26.466987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:48:26.471898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:48:26.472727 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:48:26.477102 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:48:26.481005 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:48:26.482796 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:48:26.482946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:48:26.484291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:48:26.484456 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:48:26.485932 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Dec 13 01:48:26.486081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:48:26.492733 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Dec 13 01:48:26.495083 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:48:26.497946 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:48:26.506500 augenrules[1344]: No rules Dec 13 01:48:26.511896 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:48:26.514847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:48:26.518731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:48:26.519384 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:48:26.520747 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:48:26.522095 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:48:26.523321 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:48:26.524898 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:48:26.526592 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:48:26.527969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:48:26.528103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:48:26.529575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:48:26.530193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:48:26.539382 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:48:26.539507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:48:26.548996 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1366) Dec 13 01:48:26.554490 systemd[1]: Finished ensure-sysext.service. Dec 13 01:48:26.555633 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1361) Dec 13 01:48:26.559097 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1361) Dec 13 01:48:26.566873 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:48:26.567582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:48:26.584214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:48:26.589392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:48:26.591863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:48:26.592726 systemd-resolved[1317]: Positive Trust Anchors: Dec 13 01:48:26.592743 systemd-resolved[1317]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:48:26.592774 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:48:26.593772 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:48:26.595773 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:48:26.597804 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:48:26.600701 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:48:26.601488 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:48:26.601909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:48:26.602276 systemd-resolved[1317]: Defaulting to hostname 'linux'. Dec 13 01:48:26.602687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:48:26.603722 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:48:26.605004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:48:26.606676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:48:26.607847 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:48:26.607986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:48:26.609153 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:48:26.609292 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:48:26.620291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:48:26.622952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:48:26.632830 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:48:26.633806 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:48:26.633867 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:48:26.651001 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:48:26.653927 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:48:26.655430 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:48:26.688866 systemd-networkd[1392]: lo: Link UP Dec 13 01:48:26.688879 systemd-networkd[1392]: lo: Gained carrier Dec 13 01:48:26.689538 systemd-networkd[1392]: Enumeration completed Dec 13 01:48:26.694908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Dec 13 01:48:26.696171 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:48:26.696182 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:48:26.697350 systemd-networkd[1392]: eth0: Link UP Dec 13 01:48:26.697357 systemd-networkd[1392]: eth0: Gained carrier Dec 13 01:48:26.697370 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:48:26.698841 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:48:26.700299 systemd[1]: Reached target network.target - Network. Dec 13 01:48:26.702185 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:48:26.709640 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:48:26.712074 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:48:26.713698 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:48:26.714416 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection. Dec 13 01:48:27.214477 systemd-resolved[1317]: Clock change detected. Flushing caches. Dec 13 01:48:27.214580 systemd-timesyncd[1393]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:48:27.214635 systemd-timesyncd[1393]: Initial clock synchronization to Fri 2024-12-13 01:48:27.214439 UTC. Dec 13 01:48:27.230656 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:48:27.240681 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:48:27.262782 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:48:27.263906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:48:27.264760 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:48:27.265572 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:48:27.266513 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:48:27.267595 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:48:27.268483 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:48:27.269417 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:48:27.270322 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:48:27.270355 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:48:27.271007 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:48:27.272555 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:48:27.274530 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:48:27.284486 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:48:27.286616 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:48:27.287844 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
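systemd-networkd reports a DHCPv4 lease of 10.0.0.145/16 with gateway 10.0.0.1, and systemd-timesyncd then reaches the same 10.0.0.1 on port 123. A quick stdlib check that the lease and gateway are consistent, with the values taken verbatim from the log:

    # Sanity-check the DHCPv4 lease reported above with the ipaddress module.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.145/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print("network:", iface.network)                          # 10.0.0.0/16
    print("usable hosts:", iface.network.num_addresses - 2)   # 65534
    print("gateway on-link:", gateway in iface.network)       # True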
Dec 13 01:48:27.288711 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:48:27.289377 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:48:27.290148 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:48:27.290179 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:48:27.291047 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:48:27.292721 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:48:27.295784 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:48:27.296775 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:48:27.299228 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:48:27.301040 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:48:27.306629 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:48:27.308397 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:48:27.311863 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:48:27.315490 jq[1420]: false Dec 13 01:48:27.316095 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:48:27.321336 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:48:27.321732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:48:27.322313 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:48:27.325402 extend-filesystems[1421]: Found loop3 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found loop4 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found loop5 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found vda Dec 13 01:48:27.326885 extend-filesystems[1421]: Found vda1 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found vda2 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found vda3 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found usr Dec 13 01:48:27.326885 extend-filesystems[1421]: Found vda4 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found vda6 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found vda7 Dec 13 01:48:27.326885 extend-filesystems[1421]: Found vda9 Dec 13 01:48:27.326885 extend-filesystems[1421]: Checking size of /dev/vda9 Dec 13 01:48:27.326811 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:48:27.330556 dbus-daemon[1419]: [system] SELinux support is enabled Dec 13 01:48:27.328339 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:48:27.333929 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:48:27.342002 jq[1434]: true Dec 13 01:48:27.342024 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:48:27.342186 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:48:27.342512 systemd[1]: motdgen.service: Deactivated successfully. 
Dec 13 01:48:27.342684 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:48:27.343833 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:48:27.343968 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:48:27.347799 extend-filesystems[1421]: Resized partition /dev/vda9 Dec 13 01:48:27.357980 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:48:27.368608 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:48:27.368645 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1366) Dec 13 01:48:27.375705 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:48:27.375737 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:48:27.377287 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:48:27.377310 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:48:27.381001 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:48:27.382715 jq[1441]: true Dec 13 01:48:27.390627 update_engine[1432]: I20241213 01:48:27.387413 1432 main.cc:92] Flatcar Update Engine starting Dec 13 01:48:27.390627 update_engine[1432]: I20241213 01:48:27.390324 1432 update_check_scheduler.cc:74] Next update check in 8m13s Dec 13 01:48:27.390844 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:48:27.399752 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:48:27.398847 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:48:27.409589 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:48:27.410210 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:48:27.410210 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:48:27.410210 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:48:27.419799 extend-filesystems[1421]: Resized filesystem in /dev/vda9 Dec 13 01:48:27.410403 systemd-logind[1428]: New seat seat0. Dec 13 01:48:27.413789 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:48:27.420828 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:48:27.420996 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:48:27.442815 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:48:27.453545 bash[1471]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:48:27.456193 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:48:27.457875 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
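extend-filesystems grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB. Converting those block counts into sizes:

    # Block counts from the resize2fs/EXT4 messages above; 4 KiB blocks.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")   # ~7.11 GiB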
Dec 13 01:48:27.573265 containerd[1444]: time="2024-12-13T01:48:27.573183494Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:48:27.598173 containerd[1444]: time="2024-12-13T01:48:27.598127694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:27.599564 containerd[1444]: time="2024-12-13T01:48:27.599525454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:48:27.599596 containerd[1444]: time="2024-12-13T01:48:27.599563854Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:48:27.599596 containerd[1444]: time="2024-12-13T01:48:27.599581814Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:48:27.599785 containerd[1444]: time="2024-12-13T01:48:27.599763774Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:48:27.599839 containerd[1444]: time="2024-12-13T01:48:27.599790094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:27.599860 containerd[1444]: time="2024-12-13T01:48:27.599844134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:48:27.599880 containerd[1444]: time="2024-12-13T01:48:27.599856454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:27.600042 containerd[1444]: time="2024-12-13T01:48:27.600016254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:48:27.600042 containerd[1444]: time="2024-12-13T01:48:27.600038894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:27.600089 containerd[1444]: time="2024-12-13T01:48:27.600053174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:48:27.600089 containerd[1444]: time="2024-12-13T01:48:27.600062774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:27.600155 containerd[1444]: time="2024-12-13T01:48:27.600137334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:27.600348 containerd[1444]: time="2024-12-13T01:48:27.600326694Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:48:27.600449 containerd[1444]: time="2024-12-13T01:48:27.600427454Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:48:27.600476 containerd[1444]: time="2024-12-13T01:48:27.600447614Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:48:27.600539 containerd[1444]: time="2024-12-13T01:48:27.600522734Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:48:27.600582 containerd[1444]: time="2024-12-13T01:48:27.600566454Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:48:27.603870 containerd[1444]: time="2024-12-13T01:48:27.603838974Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:48:27.603922 containerd[1444]: time="2024-12-13T01:48:27.603904694Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:48:27.603944 containerd[1444]: time="2024-12-13T01:48:27.603926574Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:48:27.603963 containerd[1444]: time="2024-12-13T01:48:27.603944974Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:48:27.603963 containerd[1444]: time="2024-12-13T01:48:27.603959414Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:48:27.604127 containerd[1444]: time="2024-12-13T01:48:27.604103734Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:48:27.604370 containerd[1444]: time="2024-12-13T01:48:27.604351134Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:48:27.604474 containerd[1444]: time="2024-12-13T01:48:27.604455094Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:48:27.604507 containerd[1444]: time="2024-12-13T01:48:27.604478014Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:48:27.604507 containerd[1444]: time="2024-12-13T01:48:27.604491014Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:48:27.604507 containerd[1444]: time="2024-12-13T01:48:27.604504454Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:48:27.604564 containerd[1444]: time="2024-12-13T01:48:27.604518934Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:48:27.604564 containerd[1444]: time="2024-12-13T01:48:27.604531854Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:48:27.604564 containerd[1444]: time="2024-12-13T01:48:27.604546734Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:48:27.604564 containerd[1444]: time="2024-12-13T01:48:27.604560814Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:48:27.604633 containerd[1444]: time="2024-12-13T01:48:27.604574374Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:48:27.604633 containerd[1444]: time="2024-12-13T01:48:27.604586654Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:48:27.604633 containerd[1444]: time="2024-12-13T01:48:27.604598774Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:48:27.604633 containerd[1444]: time="2024-12-13T01:48:27.604619454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604735 containerd[1444]: time="2024-12-13T01:48:27.604633694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604735 containerd[1444]: time="2024-12-13T01:48:27.604666414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604735 containerd[1444]: time="2024-12-13T01:48:27.604678734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604735 containerd[1444]: time="2024-12-13T01:48:27.604690134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604735 containerd[1444]: time="2024-12-13T01:48:27.604713694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604735 containerd[1444]: time="2024-12-13T01:48:27.604725414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604847 containerd[1444]: time="2024-12-13T01:48:27.604738414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604847 containerd[1444]: time="2024-12-13T01:48:27.604751574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604847 containerd[1444]: time="2024-12-13T01:48:27.604765814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604847 containerd[1444]: time="2024-12-13T01:48:27.604777814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604847 containerd[1444]: time="2024-12-13T01:48:27.604790734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604847 containerd[1444]: time="2024-12-13T01:48:27.604803294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604847 containerd[1444]: time="2024-12-13T01:48:27.604819294Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:48:27.604967 containerd[1444]: time="2024-12-13T01:48:27.604849254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.604967 containerd[1444]: time="2024-12-13T01:48:27.604863974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 01:48:27.604967 containerd[1444]: time="2024-12-13T01:48:27.604875174Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:48:27.605018 containerd[1444]: time="2024-12-13T01:48:27.604989534Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:48:27.605018 containerd[1444]: time="2024-12-13T01:48:27.605008534Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:48:27.605053 containerd[1444]: time="2024-12-13T01:48:27.605018934Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:48:27.605053 containerd[1444]: time="2024-12-13T01:48:27.605031654Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:48:27.605053 containerd[1444]: time="2024-12-13T01:48:27.605042694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.605106 containerd[1444]: time="2024-12-13T01:48:27.605054814Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:48:27.605106 containerd[1444]: time="2024-12-13T01:48:27.605068174Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:48:27.605106 containerd[1444]: time="2024-12-13T01:48:27.605080174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:48:27.607138 systemd[1]: Started containerd.service - containerd container runtime. 
Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.605405974Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.605477014Z" level=info msg="Connect containerd service" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.605506174Z" level=info msg="using legacy CRI server" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.605513374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.605602614Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.606346814Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:48:27.607708 containerd[1444]: 
time="2024-12-13T01:48:27.606533534Z" level=info msg="Start subscribing containerd event" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.606578974Z" level=info msg="Start recovering state" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.606843334Z" level=info msg="Start event monitor" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.606849814Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.606864214Z" level=info msg="Start snapshots syncer" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.606891414Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.606899974Z" level=info msg="Start streaming server" Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.606904974Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:48:27.607708 containerd[1444]: time="2024-12-13T01:48:27.607042494Z" level=info msg="containerd successfully booted in 0.035033s" Dec 13 01:48:28.332758 systemd-networkd[1392]: eth0: Gained IPv6LL Dec 13 01:48:28.336298 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:48:28.337808 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:48:28.349886 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:48:28.352021 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:48:28.353850 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:48:28.370118 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:48:28.371079 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:48:28.372365 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:48:28.376315 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:48:28.883195 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:48:28.886996 (kubelet)[1509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:48:29.178274 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:48:29.198103 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:48:29.209883 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:48:29.215723 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:48:29.215901 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:48:29.218857 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:48:29.232634 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:48:29.247905 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:48:29.249712 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:48:29.250791 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:48:29.251553 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:48:29.252493 systemd[1]: Startup finished in 549ms (kernel) + 4.459s (initrd) + 3.620s (userspace) = 8.630s. 
Dec 13 01:48:29.358528 kubelet[1509]: E1213 01:48:29.358467 1509 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:48:29.360961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:48:29.361124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:48:33.754342 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:48:33.755453 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:47478.service - OpenSSH per-connection server daemon (10.0.0.1:47478). Dec 13 01:48:33.809834 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 47478 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:33.813351 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:33.821569 systemd-logind[1428]: New session 1 of user core. Dec 13 01:48:33.822600 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:48:33.835887 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:48:33.844293 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:48:33.846454 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:48:33.852326 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:48:33.931108 systemd[1543]: Queued start job for default target default.target. Dec 13 01:48:33.940480 systemd[1543]: Created slice app.slice - User Application Slice. Dec 13 01:48:33.940521 systemd[1543]: Reached target paths.target - Paths. Dec 13 01:48:33.940533 systemd[1543]: Reached target timers.target - Timers. Dec 13 01:48:33.941687 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:48:33.950126 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:48:33.950183 systemd[1543]: Reached target sockets.target - Sockets. Dec 13 01:48:33.950194 systemd[1543]: Reached target basic.target - Basic System. Dec 13 01:48:33.950226 systemd[1543]: Reached target default.target - Main User Target. Dec 13 01:48:33.950255 systemd[1543]: Startup finished in 93ms. Dec 13 01:48:33.950515 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:48:33.963801 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:48:34.030466 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:47480.service - OpenSSH per-connection server daemon (10.0.0.1:47480). Dec 13 01:48:34.071193 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 47480 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:34.072384 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:34.076317 systemd-logind[1428]: New session 2 of user core. Dec 13 01:48:34.085847 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:48:34.137994 sshd[1554]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:34.146897 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:47480.service: Deactivated successfully. Dec 13 01:48:34.148231 systemd[1]: session-2.scope: Deactivated successfully. 
Dec 13 01:48:34.149431 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:48:34.150514 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:47484.service - OpenSSH per-connection server daemon (10.0.0.1:47484). Dec 13 01:48:34.151264 systemd-logind[1428]: Removed session 2. Dec 13 01:48:34.190670 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 47484 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:34.191809 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:34.195623 systemd-logind[1428]: New session 3 of user core. Dec 13 01:48:34.206786 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:48:34.255479 sshd[1561]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:34.263864 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:47484.service: Deactivated successfully. Dec 13 01:48:34.265237 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:48:34.266466 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:48:34.267535 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:47500.service - OpenSSH per-connection server daemon (10.0.0.1:47500). Dec 13 01:48:34.268177 systemd-logind[1428]: Removed session 3. Dec 13 01:48:34.305971 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 47500 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:34.307024 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:34.310229 systemd-logind[1428]: New session 4 of user core. Dec 13 01:48:34.325786 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:48:34.379812 sshd[1568]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:34.390852 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:47500.service: Deactivated successfully. Dec 13 01:48:34.392198 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:48:34.393373 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:48:34.394400 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:47516.service - OpenSSH per-connection server daemon (10.0.0.1:47516). Dec 13 01:48:34.395276 systemd-logind[1428]: Removed session 4. Dec 13 01:48:34.431823 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 47516 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:34.432953 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:34.436579 systemd-logind[1428]: New session 5 of user core. Dec 13 01:48:34.445809 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:48:34.507677 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:48:34.507945 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:48:34.521517 sudo[1578]: pam_unix(sudo:session): session closed for user root Dec 13 01:48:34.523081 sshd[1575]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:34.535994 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:47516.service: Deactivated successfully. Dec 13 01:48:34.538897 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:48:34.540246 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:48:34.546948 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:47526.service - OpenSSH per-connection server daemon (10.0.0.1:47526). 
Dec 13 01:48:34.550558 systemd-logind[1428]: Removed session 5. Dec 13 01:48:34.580811 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 47526 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:34.582078 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:34.585628 systemd-logind[1428]: New session 6 of user core. Dec 13 01:48:34.593793 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:48:34.644305 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:48:34.644868 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:48:34.647896 sudo[1587]: pam_unix(sudo:session): session closed for user root Dec 13 01:48:34.652598 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:48:34.653218 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:48:34.669868 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:48:34.671012 auditctl[1590]: No rules Dec 13 01:48:34.671866 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:48:34.672078 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:48:34.673595 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:48:34.696777 augenrules[1608]: No rules Dec 13 01:48:34.697822 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:48:34.698854 sudo[1586]: pam_unix(sudo:session): session closed for user root Dec 13 01:48:34.700410 sshd[1583]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:34.714976 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:47526.service: Deactivated successfully. Dec 13 01:48:34.716280 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:48:34.717687 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:48:34.725943 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:47542.service - OpenSSH per-connection server daemon (10.0.0.1:47542). Dec 13 01:48:34.726840 systemd-logind[1428]: Removed session 6. Dec 13 01:48:34.758669 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 47542 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:48:34.760034 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:48:34.771871 systemd-logind[1428]: New session 7 of user core. Dec 13 01:48:34.779976 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:48:34.833614 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:48:34.833925 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:48:34.859946 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:48:34.879007 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:48:34.879201 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:48:35.319234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:48:35.332488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:48:35.365647 systemd[1]: Reloading requested from client PID 1659 ('systemctl') (unit session-7.scope)... 
Dec 13 01:48:35.365668 systemd[1]: Reloading... Dec 13 01:48:35.431680 zram_generator::config[1694]: No configuration found. Dec 13 01:48:35.584145 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:48:35.635375 systemd[1]: Reloading finished in 269 ms. Dec 13 01:48:35.672173 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:48:35.674936 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:48:35.675144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:48:35.676446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:48:35.766614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:48:35.770928 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:48:35.804215 kubelet[1744]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:48:35.804215 kubelet[1744]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:48:35.804215 kubelet[1744]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:48:35.804599 kubelet[1744]: I1213 01:48:35.804382 1744 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:48:37.857009 kubelet[1744]: I1213 01:48:37.856958 1744 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:48:37.857009 kubelet[1744]: I1213 01:48:37.856993 1744 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:48:37.857413 kubelet[1744]: I1213 01:48:37.857245 1744 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:48:37.916689 kubelet[1744]: I1213 01:48:37.915031 1744 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:48:37.926057 kubelet[1744]: E1213 01:48:37.926018 1744 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:48:37.926057 kubelet[1744]: I1213 01:48:37.926053 1744 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:48:37.931200 kubelet[1744]: I1213 01:48:37.931161 1744 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:48:37.931929 kubelet[1744]: I1213 01:48:37.931894 1744 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:48:37.932101 kubelet[1744]: I1213 01:48:37.932060 1744 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:48:37.932264 kubelet[1744]: I1213 01:48:37.932090 1744 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.145","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:48:37.932394 kubelet[1744]: I1213 01:48:37.932384 1744 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:48:37.932394 kubelet[1744]: I1213 01:48:37.932395 1744 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:48:37.932592 kubelet[1744]: I1213 01:48:37.932571 1744 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:48:37.937646 kubelet[1744]: I1213 01:48:37.937608 1744 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:48:37.937708 kubelet[1744]: I1213 01:48:37.937659 1744 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:48:37.937756 kubelet[1744]: I1213 01:48:37.937746 1744 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:48:37.937781 kubelet[1744]: I1213 01:48:37.937759 1744 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:48:37.939551 kubelet[1744]: E1213 01:48:37.939227 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:37.939551 kubelet[1744]: E1213 01:48:37.939382 1744 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:37.939551 kubelet[1744]: I1213 01:48:37.939523 1744 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:48:37.941301 kubelet[1744]: I1213 01:48:37.941268 1744 
kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:48:37.941964 kubelet[1744]: W1213 01:48:37.941937 1744 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:48:37.942735 kubelet[1744]: I1213 01:48:37.942616 1744 server.go:1269] "Started kubelet" Dec 13 01:48:37.947304 kubelet[1744]: I1213 01:48:37.943510 1744 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:48:37.947304 kubelet[1744]: I1213 01:48:37.943855 1744 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:48:37.947304 kubelet[1744]: I1213 01:48:37.943997 1744 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:48:37.947304 kubelet[1744]: I1213 01:48:37.944048 1744 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:48:37.947304 kubelet[1744]: I1213 01:48:37.945623 1744 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:48:37.947304 kubelet[1744]: I1213 01:48:37.946494 1744 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:48:37.947480 kubelet[1744]: I1213 01:48:37.947424 1744 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:48:37.947560 kubelet[1744]: I1213 01:48:37.947537 1744 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:48:37.947617 kubelet[1744]: I1213 01:48:37.947604 1744 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:48:37.948744 kubelet[1744]: I1213 01:48:37.948721 1744 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:48:37.948831 kubelet[1744]: I1213 01:48:37.948810 1744 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:48:37.949183 kubelet[1744]: E1213 01:48:37.949064 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:37.951027 kubelet[1744]: I1213 01:48:37.950997 1744 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:48:37.956937 kubelet[1744]: E1213 01:48:37.956895 1744 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 01:48:37.957836 kubelet[1744]: E1213 01:48:37.957669 1744 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:48:37.957918 kubelet[1744]: W1213 01:48:37.957851 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:48:37.957918 kubelet[1744]: W1213 01:48:37.957896 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.145" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:48:37.957968 kubelet[1744]: E1213 01:48:37.957940 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.145\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 01:48:37.957968 kubelet[1744]: E1213 01:48:37.957954 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 01:48:37.962644 kubelet[1744]: W1213 01:48:37.962614 1744 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 01:48:37.962726 kubelet[1744]: E1213 01:48:37.962662 1744 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Dec 13 01:48:37.963568 kubelet[1744]: E1213 01:48:37.962615 1744 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.181099658b21211e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2024-12-13 01:48:37.942591774 +0000 UTC m=+2.167672561,LastTimestamp:2024-12-13 01:48:37.942591774 +0000 UTC m=+2.167672561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Dec 13 01:48:37.964890 kubelet[1744]: E1213 01:48:37.964790 1744 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.145.181099658c06e7c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.145,UID:10.0.0.145,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.145,},FirstTimestamp:2024-12-13 01:48:37.957650374 +0000 UTC m=+2.182731161,LastTimestamp:2024-12-13 01:48:37.957650374 +0000 UTC m=+2.182731161,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.145,}" Dec 13 01:48:37.967397 kubelet[1744]: I1213 01:48:37.967181 1744 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:48:37.967397 kubelet[1744]: I1213 01:48:37.967196 1744 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:48:37.967397 kubelet[1744]: I1213 01:48:37.967212 1744 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:48:37.969224 kubelet[1744]: I1213 01:48:37.969189 1744 policy_none.go:49] "None policy: Start" Dec 13 01:48:37.969905 kubelet[1744]: I1213 01:48:37.969886 1744 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:48:37.970009 kubelet[1744]: I1213 01:48:37.969910 1744 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:48:37.976578 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:48:37.987461 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:48:37.990124 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:48:37.994495 kubelet[1744]: I1213 01:48:37.994449 1744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:48:37.995596 kubelet[1744]: I1213 01:48:37.995358 1744 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:48:37.995596 kubelet[1744]: I1213 01:48:37.995385 1744 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:48:37.995596 kubelet[1744]: I1213 01:48:37.995402 1744 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:48:37.995596 kubelet[1744]: E1213 01:48:37.995516 1744 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:48:37.996730 kubelet[1744]: I1213 01:48:37.996703 1744 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:48:37.996905 kubelet[1744]: I1213 01:48:37.996889 1744 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:48:37.996948 kubelet[1744]: I1213 01:48:37.996906 1744 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:48:37.998650 kubelet[1744]: I1213 01:48:37.998588 1744 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:48:38.000444 kubelet[1744]: E1213 01:48:38.000410 1744 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.145\" not found" Dec 13 01:48:38.098613 kubelet[1744]: I1213 01:48:38.098575 1744 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.145" Dec 13 01:48:38.103732 kubelet[1744]: I1213 01:48:38.103711 1744 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.145" Dec 13 01:48:38.103806 kubelet[1744]: E1213 01:48:38.103779 1744 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.145\": node \"10.0.0.145\" not found" Dec 13 01:48:38.121189 kubelet[1744]: E1213 01:48:38.121084 
1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.222052 kubelet[1744]: E1213 01:48:38.222005 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.299830 sudo[1619]: pam_unix(sudo:session): session closed for user root Dec 13 01:48:38.301437 sshd[1616]: pam_unix(sshd:session): session closed for user core Dec 13 01:48:38.303903 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:47542.service: Deactivated successfully. Dec 13 01:48:38.306091 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:48:38.307513 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:48:38.308590 systemd-logind[1428]: Removed session 7. Dec 13 01:48:38.322342 kubelet[1744]: E1213 01:48:38.322303 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.422896 kubelet[1744]: E1213 01:48:38.422805 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.523621 kubelet[1744]: E1213 01:48:38.523578 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.624128 kubelet[1744]: E1213 01:48:38.624085 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.724698 kubelet[1744]: E1213 01:48:38.724605 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.825182 kubelet[1744]: E1213 01:48:38.825138 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.859684 kubelet[1744]: I1213 01:48:38.859633 1744 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:48:38.859941 kubelet[1744]: W1213 01:48:38.859805 1744 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:48:38.926049 kubelet[1744]: E1213 01:48:38.926012 1744 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.145\" not found" Dec 13 01:48:38.940212 kubelet[1744]: E1213 01:48:38.940180 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:39.027123 kubelet[1744]: I1213 01:48:39.027098 1744 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:48:39.027412 containerd[1444]: time="2024-12-13T01:48:39.027363974Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 01:48:39.027783 kubelet[1744]: I1213 01:48:39.027512 1744 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:48:39.940473 kubelet[1744]: E1213 01:48:39.940423 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:39.940473 kubelet[1744]: I1213 01:48:39.940442 1744 apiserver.go:52] "Watching apiserver" Dec 13 01:48:39.943918 kubelet[1744]: E1213 01:48:39.943692 1744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cxgk2" podUID="10877efe-9146-4b36-8bbb-f15ba78d288c" Dec 13 01:48:39.948274 kubelet[1744]: I1213 01:48:39.948233 1744 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:48:39.953283 systemd[1]: Created slice kubepods-besteffort-pod62830669_e81f_4227_8fcd_526f5af0867e.slice - libcontainer container kubepods-besteffort-pod62830669_e81f_4227_8fcd_526f5af0867e.slice. Dec 13 01:48:39.958143 kubelet[1744]: I1213 01:48:39.958101 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-cni-log-dir\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958143 kubelet[1744]: I1213 01:48:39.958141 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-xtables-lock\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958261 kubelet[1744]: I1213 01:48:39.958160 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-tigera-ca-bundle\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958261 kubelet[1744]: I1213 01:48:39.958176 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-node-certs\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958261 kubelet[1744]: I1213 01:48:39.958191 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-var-run-calico\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958261 kubelet[1744]: I1213 01:48:39.958206 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7vz\" (UniqueName: \"kubernetes.io/projected/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-kube-api-access-pt7vz\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958261 kubelet[1744]: I1213 
01:48:39.958222 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/10877efe-9146-4b36-8bbb-f15ba78d288c-kubelet-dir\") pod \"csi-node-driver-cxgk2\" (UID: \"10877efe-9146-4b36-8bbb-f15ba78d288c\") " pod="calico-system/csi-node-driver-cxgk2" Dec 13 01:48:39.958361 kubelet[1744]: I1213 01:48:39.958239 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62830669-e81f-4227-8fcd-526f5af0867e-lib-modules\") pod \"kube-proxy-25lbm\" (UID: \"62830669-e81f-4227-8fcd-526f5af0867e\") " pod="kube-system/kube-proxy-25lbm" Dec 13 01:48:39.958361 kubelet[1744]: I1213 01:48:39.958255 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-lib-modules\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958361 kubelet[1744]: I1213 01:48:39.958269 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62830669-e81f-4227-8fcd-526f5af0867e-kube-proxy\") pod \"kube-proxy-25lbm\" (UID: \"62830669-e81f-4227-8fcd-526f5af0867e\") " pod="kube-system/kube-proxy-25lbm" Dec 13 01:48:39.958361 kubelet[1744]: I1213 01:48:39.958283 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-policysync\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958361 kubelet[1744]: I1213 01:48:39.958298 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-var-lib-calico\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958454 kubelet[1744]: I1213 01:48:39.958312 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-cni-bin-dir\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958454 kubelet[1744]: I1213 01:48:39.958325 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-cni-net-dir\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958454 kubelet[1744]: I1213 01:48:39.958341 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5078edd-43b0-49f9-bfab-c9ad69e5ecae-flexvol-driver-host\") pod \"calico-node-ld8fw\" (UID: \"f5078edd-43b0-49f9-bfab-c9ad69e5ecae\") " pod="calico-system/calico-node-ld8fw" Dec 13 01:48:39.958454 kubelet[1744]: I1213 01:48:39.958356 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/10877efe-9146-4b36-8bbb-f15ba78d288c-varrun\") pod \"csi-node-driver-cxgk2\" (UID: \"10877efe-9146-4b36-8bbb-f15ba78d288c\") " pod="calico-system/csi-node-driver-cxgk2" Dec 13 01:48:39.958454 kubelet[1744]: I1213 01:48:39.958371 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62830669-e81f-4227-8fcd-526f5af0867e-xtables-lock\") pod \"kube-proxy-25lbm\" (UID: \"62830669-e81f-4227-8fcd-526f5af0867e\") " pod="kube-system/kube-proxy-25lbm" Dec 13 01:48:39.958550 kubelet[1744]: I1213 01:48:39.958388 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4srx\" (UniqueName: \"kubernetes.io/projected/62830669-e81f-4227-8fcd-526f5af0867e-kube-api-access-z4srx\") pod \"kube-proxy-25lbm\" (UID: \"62830669-e81f-4227-8fcd-526f5af0867e\") " pod="kube-system/kube-proxy-25lbm" Dec 13 01:48:39.958550 kubelet[1744]: I1213 01:48:39.958402 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf7j4\" (UniqueName: \"kubernetes.io/projected/10877efe-9146-4b36-8bbb-f15ba78d288c-kube-api-access-rf7j4\") pod \"csi-node-driver-cxgk2\" (UID: \"10877efe-9146-4b36-8bbb-f15ba78d288c\") " pod="calico-system/csi-node-driver-cxgk2" Dec 13 01:48:39.958550 kubelet[1744]: I1213 01:48:39.958416 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/10877efe-9146-4b36-8bbb-f15ba78d288c-socket-dir\") pod \"csi-node-driver-cxgk2\" (UID: \"10877efe-9146-4b36-8bbb-f15ba78d288c\") " pod="calico-system/csi-node-driver-cxgk2" Dec 13 01:48:39.958550 kubelet[1744]: I1213 01:48:39.958430 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/10877efe-9146-4b36-8bbb-f15ba78d288c-registration-dir\") pod \"csi-node-driver-cxgk2\" (UID: \"10877efe-9146-4b36-8bbb-f15ba78d288c\") " pod="calico-system/csi-node-driver-cxgk2" Dec 13 01:48:39.965829 systemd[1]: Created slice kubepods-besteffort-podf5078edd_43b0_49f9_bfab_c9ad69e5ecae.slice - libcontainer container kubepods-besteffort-podf5078edd_43b0_49f9_bfab_c9ad69e5ecae.slice. Dec 13 01:48:40.062708 kubelet[1744]: E1213 01:48:40.062681 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:40.062880 kubelet[1744]: W1213 01:48:40.062808 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:40.062880 kubelet[1744]: E1213 01:48:40.062839 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:40.071150 kubelet[1744]: E1213 01:48:40.071012 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:40.071150 kubelet[1744]: W1213 01:48:40.071029 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:40.071150 kubelet[1744]: E1213 01:48:40.071044 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:40.071531 kubelet[1744]: E1213 01:48:40.071517 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:40.071666 kubelet[1744]: W1213 01:48:40.071598 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:40.071666 kubelet[1744]: E1213 01:48:40.071621 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:40.072888 kubelet[1744]: E1213 01:48:40.072853 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:40.072888 kubelet[1744]: W1213 01:48:40.072872 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:40.072888 kubelet[1744]: E1213 01:48:40.072887 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:40.263986 kubelet[1744]: E1213 01:48:40.263949 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:40.264731 containerd[1444]: time="2024-12-13T01:48:40.264685694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25lbm,Uid:62830669-e81f-4227-8fcd-526f5af0867e,Namespace:kube-system,Attempt:0,}" Dec 13 01:48:40.269597 kubelet[1744]: E1213 01:48:40.269562 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:40.270037 containerd[1444]: time="2024-12-13T01:48:40.269996814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ld8fw,Uid:f5078edd-43b0-49f9-bfab-c9ad69e5ecae,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:40.772804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300444067.mount: Deactivated successfully. 
Dec 13 01:48:40.778653 containerd[1444]: time="2024-12-13T01:48:40.778600494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:40.779692 containerd[1444]: time="2024-12-13T01:48:40.779613814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 01:48:40.780938 containerd[1444]: time="2024-12-13T01:48:40.780906214Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:40.782677 containerd[1444]: time="2024-12-13T01:48:40.781771134Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:40.782677 containerd[1444]: time="2024-12-13T01:48:40.782070094Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:48:40.784516 containerd[1444]: time="2024-12-13T01:48:40.784475934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:48:40.786299 containerd[1444]: time="2024-12-13T01:48:40.786265454Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.49576ms" Dec 13 01:48:40.787766 containerd[1444]: time="2024-12-13T01:48:40.787727934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.63972ms" Dec 13 01:48:40.899136 containerd[1444]: time="2024-12-13T01:48:40.899014654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:40.899136 containerd[1444]: time="2024-12-13T01:48:40.899086614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:40.899342 containerd[1444]: time="2024-12-13T01:48:40.899112334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:40.899342 containerd[1444]: time="2024-12-13T01:48:40.899212254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:40.900061 containerd[1444]: time="2024-12-13T01:48:40.899988214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:48:40.900149 containerd[1444]: time="2024-12-13T01:48:40.900053334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:48:40.900149 containerd[1444]: time="2024-12-13T01:48:40.900088774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:40.900277 containerd[1444]: time="2024-12-13T01:48:40.900166334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:48:40.940838 kubelet[1744]: E1213 01:48:40.940798 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:40.999801 systemd[1]: Started cri-containerd-95a8ba3b2b525e1529a0be1d05ac55be468d8f6ebf3492be318936beae72166b.scope - libcontainer container 95a8ba3b2b525e1529a0be1d05ac55be468d8f6ebf3492be318936beae72166b. Dec 13 01:48:41.001238 systemd[1]: Started cri-containerd-b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881.scope - libcontainer container b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881. Dec 13 01:48:41.021458 containerd[1444]: time="2024-12-13T01:48:41.021297414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-25lbm,Uid:62830669-e81f-4227-8fcd-526f5af0867e,Namespace:kube-system,Attempt:0,} returns sandbox id \"95a8ba3b2b525e1529a0be1d05ac55be468d8f6ebf3492be318936beae72166b\"" Dec 13 01:48:41.022680 kubelet[1744]: E1213 01:48:41.022397 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:41.023799 containerd[1444]: time="2024-12-13T01:48:41.023725894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:48:41.023799 containerd[1444]: time="2024-12-13T01:48:41.023752254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ld8fw,Uid:f5078edd-43b0-49f9-bfab-c9ad69e5ecae,Namespace:calico-system,Attempt:0,} returns sandbox id \"b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881\"" Dec 13 01:48:41.025219 kubelet[1744]: E1213 01:48:41.025192 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:41.940945 kubelet[1744]: E1213 01:48:41.940897 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:41.996009 kubelet[1744]: E1213 01:48:41.995953 1744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cxgk2" podUID="10877efe-9146-4b36-8bbb-f15ba78d288c" Dec 13 01:48:42.068673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855125453.mount: Deactivated successfully. 
Dec 13 01:48:42.274256 containerd[1444]: time="2024-12-13T01:48:42.274208734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:42.274568 containerd[1444]: time="2024-12-13T01:48:42.274527054Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428" Dec 13 01:48:42.275469 containerd[1444]: time="2024-12-13T01:48:42.275437214Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:42.277672 containerd[1444]: time="2024-12-13T01:48:42.277623174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:42.278337 containerd[1444]: time="2024-12-13T01:48:42.278303614Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.25453728s" Dec 13 01:48:42.278361 containerd[1444]: time="2024-12-13T01:48:42.278346294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 01:48:42.279573 containerd[1444]: time="2024-12-13T01:48:42.279550254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:48:42.280655 containerd[1444]: time="2024-12-13T01:48:42.280588454Z" level=info msg="CreateContainer within sandbox \"95a8ba3b2b525e1529a0be1d05ac55be468d8f6ebf3492be318936beae72166b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:48:42.291818 containerd[1444]: time="2024-12-13T01:48:42.291776934Z" level=info msg="CreateContainer within sandbox \"95a8ba3b2b525e1529a0be1d05ac55be468d8f6ebf3492be318936beae72166b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9c602bdca453f18c102ca80569f62ff10a0f2bf2a4106b2d9f3d6c5a1f21b089\"" Dec 13 01:48:42.292314 containerd[1444]: time="2024-12-13T01:48:42.292249214Z" level=info msg="StartContainer for \"9c602bdca453f18c102ca80569f62ff10a0f2bf2a4106b2d9f3d6c5a1f21b089\"" Dec 13 01:48:42.321855 systemd[1]: Started cri-containerd-9c602bdca453f18c102ca80569f62ff10a0f2bf2a4106b2d9f3d6c5a1f21b089.scope - libcontainer container 9c602bdca453f18c102ca80569f62ff10a0f2bf2a4106b2d9f3d6c5a1f21b089. 
Dec 13 01:48:42.344468 containerd[1444]: time="2024-12-13T01:48:42.344425774Z" level=info msg="StartContainer for \"9c602bdca453f18c102ca80569f62ff10a0f2bf2a4106b2d9f3d6c5a1f21b089\" returns successfully" Dec 13 01:48:42.941578 kubelet[1744]: E1213 01:48:42.941529 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:43.010262 kubelet[1744]: E1213 01:48:43.010214 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:43.019688 kubelet[1744]: I1213 01:48:43.019609 1744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-25lbm" podStartSLOduration=3.763502174 podStartE2EDuration="5.019598494s" podCreationTimestamp="2024-12-13 01:48:38 +0000 UTC" firstStartedPulling="2024-12-13 01:48:41.023318534 +0000 UTC m=+5.248399321" lastFinishedPulling="2024-12-13 01:48:42.279414774 +0000 UTC m=+6.504495641" observedRunningTime="2024-12-13 01:48:43.018996894 +0000 UTC m=+7.244077681" watchObservedRunningTime="2024-12-13 01:48:43.019598494 +0000 UTC m=+7.244679281" Dec 13 01:48:43.067197 kubelet[1744]: E1213 01:48:43.067160 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.067197 kubelet[1744]: W1213 01:48:43.067182 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.067197 kubelet[1744]: E1213 01:48:43.067202 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.067425 kubelet[1744]: E1213 01:48:43.067401 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.067425 kubelet[1744]: W1213 01:48:43.067413 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.067483 kubelet[1744]: E1213 01:48:43.067427 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.067755 kubelet[1744]: E1213 01:48:43.067583 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.067755 kubelet[1744]: W1213 01:48:43.067596 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.067755 kubelet[1744]: E1213 01:48:43.067605 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.068843 systemd[1]: run-containerd-runc-k8s.io-9c602bdca453f18c102ca80569f62ff10a0f2bf2a4106b2d9f3d6c5a1f21b089-runc.sUhw8e.mount: Deactivated successfully. 
Dec 13 01:48:43.069543 kubelet[1744]: E1213 01:48:43.069426 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.069543 kubelet[1744]: W1213 01:48:43.069443 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.069543 kubelet[1744]: E1213 01:48:43.069459 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.069708 kubelet[1744]: E1213 01:48:43.069650 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.069708 kubelet[1744]: W1213 01:48:43.069661 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.069708 kubelet[1744]: E1213 01:48:43.069672 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.069831 kubelet[1744]: E1213 01:48:43.069814 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.069831 kubelet[1744]: W1213 01:48:43.069828 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.069902 kubelet[1744]: E1213 01:48:43.069838 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.070050 kubelet[1744]: E1213 01:48:43.070035 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.070095 kubelet[1744]: W1213 01:48:43.070053 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.070095 kubelet[1744]: E1213 01:48:43.070065 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.070270 kubelet[1744]: E1213 01:48:43.070252 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.070270 kubelet[1744]: W1213 01:48:43.070264 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.070337 kubelet[1744]: E1213 01:48:43.070274 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:43.070439 kubelet[1744]: E1213 01:48:43.070429 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.070439 kubelet[1744]: W1213 01:48:43.070440 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.070508 kubelet[1744]: E1213 01:48:43.070447 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.070576 kubelet[1744]: E1213 01:48:43.070565 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.070576 kubelet[1744]: W1213 01:48:43.070576 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.070665 kubelet[1744]: E1213 01:48:43.070583 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.070724 kubelet[1744]: E1213 01:48:43.070712 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.070724 kubelet[1744]: W1213 01:48:43.070723 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.070796 kubelet[1744]: E1213 01:48:43.070731 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.070865 kubelet[1744]: E1213 01:48:43.070852 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.070865 kubelet[1744]: W1213 01:48:43.070861 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.070927 kubelet[1744]: E1213 01:48:43.070871 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.071014 kubelet[1744]: E1213 01:48:43.071003 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.071014 kubelet[1744]: W1213 01:48:43.071011 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.071075 kubelet[1744]: E1213 01:48:43.071018 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:43.071151 kubelet[1744]: E1213 01:48:43.071141 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.071151 kubelet[1744]: W1213 01:48:43.071150 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.071201 kubelet[1744]: E1213 01:48:43.071178 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.071306 kubelet[1744]: E1213 01:48:43.071296 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.071306 kubelet[1744]: W1213 01:48:43.071306 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.071377 kubelet[1744]: E1213 01:48:43.071313 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.071437 kubelet[1744]: E1213 01:48:43.071428 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.071437 kubelet[1744]: W1213 01:48:43.071436 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.071497 kubelet[1744]: E1213 01:48:43.071444 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.071577 kubelet[1744]: E1213 01:48:43.071567 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.071577 kubelet[1744]: W1213 01:48:43.071576 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.071625 kubelet[1744]: E1213 01:48:43.071583 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.071752 kubelet[1744]: E1213 01:48:43.071741 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.071752 kubelet[1744]: W1213 01:48:43.071750 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.071826 kubelet[1744]: E1213 01:48:43.071758 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:43.071886 kubelet[1744]: E1213 01:48:43.071876 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.071886 kubelet[1744]: W1213 01:48:43.071885 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.071942 kubelet[1744]: E1213 01:48:43.071892 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.072012 kubelet[1744]: E1213 01:48:43.072002 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.072012 kubelet[1744]: W1213 01:48:43.072011 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.072079 kubelet[1744]: E1213 01:48:43.072018 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.075490 kubelet[1744]: E1213 01:48:43.075363 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.075490 kubelet[1744]: W1213 01:48:43.075384 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.075490 kubelet[1744]: E1213 01:48:43.075398 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.075714 kubelet[1744]: E1213 01:48:43.075700 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.075798 kubelet[1744]: W1213 01:48:43.075754 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.075911 kubelet[1744]: E1213 01:48:43.075838 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.076198 kubelet[1744]: E1213 01:48:43.076130 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.076198 kubelet[1744]: W1213 01:48:43.076144 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.076198 kubelet[1744]: E1213 01:48:43.076159 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:43.076625 kubelet[1744]: E1213 01:48:43.076483 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.076625 kubelet[1744]: W1213 01:48:43.076495 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.076625 kubelet[1744]: E1213 01:48:43.076566 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.076979 kubelet[1744]: E1213 01:48:43.076887 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.076979 kubelet[1744]: W1213 01:48:43.076901 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.076979 kubelet[1744]: E1213 01:48:43.076956 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.077255 kubelet[1744]: E1213 01:48:43.077164 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.077255 kubelet[1744]: W1213 01:48:43.077176 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.077255 kubelet[1744]: E1213 01:48:43.077194 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.077930 kubelet[1744]: E1213 01:48:43.077802 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.077930 kubelet[1744]: W1213 01:48:43.077818 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.077930 kubelet[1744]: E1213 01:48:43.077835 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.078050 kubelet[1744]: E1213 01:48:43.077989 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.078050 kubelet[1744]: W1213 01:48:43.077997 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.078090 kubelet[1744]: E1213 01:48:43.078066 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:48:43.078337 kubelet[1744]: E1213 01:48:43.078321 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.078337 kubelet[1744]: W1213 01:48:43.078334 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.078416 kubelet[1744]: E1213 01:48:43.078399 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.078546 kubelet[1744]: E1213 01:48:43.078534 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.078788 kubelet[1744]: W1213 01:48:43.078612 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.078788 kubelet[1744]: E1213 01:48:43.078661 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.078921 kubelet[1744]: E1213 01:48:43.078907 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.078921 kubelet[1744]: W1213 01:48:43.078919 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.078971 kubelet[1744]: E1213 01:48:43.078930 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.079233 kubelet[1744]: E1213 01:48:43.079218 1744 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:48:43.079297 kubelet[1744]: W1213 01:48:43.079285 1744 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:48:43.079344 kubelet[1744]: E1213 01:48:43.079335 1744 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:48:43.129436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871908200.mount: Deactivated successfully. 
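The burst of driver-call.go / plugins.go errors above is the kubelet probing its FlexVolume plugin directory and exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init". The binary is not installed yet (the pod2daemon-flexvol init container that ships it is still being pulled just below), so the exec fails, the output is empty, and the JSON unmarshal breaks. For reference, a minimal sketch of the output shape a FlexVolume driver is expected to print for "init" under the usual FlexVolume calling convention; this is not Calico's actual uds driver.

```go
// Minimal sketch of a FlexVolume driver's "init" response, assuming the
// standard FlexVolume convention of a JSON status object on stdout.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells the kubelet this driver implements no
		// attach/detach phase, so those calls are skipped.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Any other command: answer in the same JSON envelope.
	json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}
```

Once the flexvol-driver container started at 01:48:43.24 has dropped the real uds binary into that path, this class of probe error presumably stops recurring, which matches the rest of the excerpt.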
Dec 13 01:48:43.182132 containerd[1444]: time="2024-12-13T01:48:43.182078454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:43.182673 containerd[1444]: time="2024-12-13T01:48:43.182630214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Dec 13 01:48:43.183309 containerd[1444]: time="2024-12-13T01:48:43.183261174Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:43.185161 containerd[1444]: time="2024-12-13T01:48:43.185128374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:43.185993 containerd[1444]: time="2024-12-13T01:48:43.185869054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 906.28668ms" Dec 13 01:48:43.185993 containerd[1444]: time="2024-12-13T01:48:43.185902054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:48:43.187751 containerd[1444]: time="2024-12-13T01:48:43.187721894Z" level=info msg="CreateContainer within sandbox \"b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:48:43.200446 containerd[1444]: time="2024-12-13T01:48:43.200307254Z" level=info msg="CreateContainer within sandbox \"b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"25f32d14de5bddcf33f0e08297567dcd9bed607873c4a577307750df9cba7a4c\"" Dec 13 01:48:43.201014 containerd[1444]: time="2024-12-13T01:48:43.200947534Z" level=info msg="StartContainer for \"25f32d14de5bddcf33f0e08297567dcd9bed607873c4a577307750df9cba7a4c\"" Dec 13 01:48:43.223789 systemd[1]: Started cri-containerd-25f32d14de5bddcf33f0e08297567dcd9bed607873c4a577307750df9cba7a4c.scope - libcontainer container 25f32d14de5bddcf33f0e08297567dcd9bed607873c4a577307750df9cba7a4c. Dec 13 01:48:43.242160 containerd[1444]: time="2024-12-13T01:48:43.242117654Z" level=info msg="StartContainer for \"25f32d14de5bddcf33f0e08297567dcd9bed607873c4a577307750df9cba7a4c\" returns successfully" Dec 13 01:48:43.262993 systemd[1]: cri-containerd-25f32d14de5bddcf33f0e08297567dcd9bed607873c4a577307750df9cba7a4c.scope: Deactivated successfully. 
Dec 13 01:48:43.419463 containerd[1444]: time="2024-12-13T01:48:43.419410094Z" level=info msg="shim disconnected" id=25f32d14de5bddcf33f0e08297567dcd9bed607873c4a577307750df9cba7a4c namespace=k8s.io Dec 13 01:48:43.420075 containerd[1444]: time="2024-12-13T01:48:43.419881094Z" level=warning msg="cleaning up after shim disconnected" id=25f32d14de5bddcf33f0e08297567dcd9bed607873c4a577307750df9cba7a4c namespace=k8s.io Dec 13 01:48:43.420075 containerd[1444]: time="2024-12-13T01:48:43.419902374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:48:43.942081 kubelet[1744]: E1213 01:48:43.942040 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:43.995762 kubelet[1744]: E1213 01:48:43.995683 1744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cxgk2" podUID="10877efe-9146-4b36-8bbb-f15ba78d288c" Dec 13 01:48:44.012574 kubelet[1744]: E1213 01:48:44.012428 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:44.012574 kubelet[1744]: E1213 01:48:44.012500 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:44.013247 containerd[1444]: time="2024-12-13T01:48:44.013077814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:48:44.942550 kubelet[1744]: E1213 01:48:44.942481 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:45.804352 containerd[1444]: time="2024-12-13T01:48:45.804296214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:45.805303 containerd[1444]: time="2024-12-13T01:48:45.805274894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:48:45.806091 containerd[1444]: time="2024-12-13T01:48:45.806064814Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:45.808591 containerd[1444]: time="2024-12-13T01:48:45.808546894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:45.809844 containerd[1444]: time="2024-12-13T01:48:45.809810054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 1.79669476s" Dec 13 01:48:45.809887 containerd[1444]: time="2024-12-13T01:48:45.809845334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 
01:48:45.811589 containerd[1444]: time="2024-12-13T01:48:45.811562014Z" level=info msg="CreateContainer within sandbox \"b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:48:45.823296 containerd[1444]: time="2024-12-13T01:48:45.823257334Z" level=info msg="CreateContainer within sandbox \"b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323\"" Dec 13 01:48:45.823900 containerd[1444]: time="2024-12-13T01:48:45.823862894Z" level=info msg="StartContainer for \"6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323\"" Dec 13 01:48:45.849874 systemd[1]: Started cri-containerd-6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323.scope - libcontainer container 6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323. Dec 13 01:48:45.873113 containerd[1444]: time="2024-12-13T01:48:45.872084654Z" level=info msg="StartContainer for \"6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323\" returns successfully" Dec 13 01:48:45.942742 kubelet[1744]: E1213 01:48:45.942688 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:45.996920 kubelet[1744]: E1213 01:48:45.996864 1744 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cxgk2" podUID="10877efe-9146-4b36-8bbb-f15ba78d288c" Dec 13 01:48:46.017606 kubelet[1744]: E1213 01:48:46.017580 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:46.302327 containerd[1444]: time="2024-12-13T01:48:46.302145814Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:48:46.303900 systemd[1]: cri-containerd-6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323.scope: Deactivated successfully. Dec 13 01:48:46.319173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323-rootfs.mount: Deactivated successfully. 
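The "failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error above is containerd's CRI plugin reacting to a filesystem event in /etc/cni/net.d: the install-cni container has just written calico-kubeconfig, but no *.conflist exists yet, so the reload finds no network config and pod networking stays NotReady. A toy watcher in the same spirit, assuming the github.com/fsnotify/fsnotify package; containerd's real implementation differs.

```go
// Toy sketch of a config-directory watcher like the one implied by the
// "fs change event(WRITE ...)" message above. Assumes github.com/fsnotify/fsnotify.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		if ev.Op&fsnotify.Write != 0 {
			// A real CRI runtime would re-list *.conflist files here and
			// rebuild its CNI network configuration.
			log.Printf("fs change event(%s): would reload CNI config", ev)
		}
	}
}
```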
Dec 13 01:48:46.353995 kubelet[1744]: I1213 01:48:46.353966 1744 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:48:46.550673 containerd[1444]: time="2024-12-13T01:48:46.550551734Z" level=info msg="shim disconnected" id=6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323 namespace=k8s.io Dec 13 01:48:46.550673 containerd[1444]: time="2024-12-13T01:48:46.550602934Z" level=warning msg="cleaning up after shim disconnected" id=6b8e4a0acd291f5685103c0846cc34d6476cfd5b83d458699921df497ac0c323 namespace=k8s.io Dec 13 01:48:46.550673 containerd[1444]: time="2024-12-13T01:48:46.550614934Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:48:46.943045 kubelet[1744]: E1213 01:48:46.942997 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:47.020202 kubelet[1744]: E1213 01:48:47.020159 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:47.020975 containerd[1444]: time="2024-12-13T01:48:47.020774854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:48:47.943166 kubelet[1744]: E1213 01:48:47.943118 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:48.001316 systemd[1]: Created slice kubepods-besteffort-pod10877efe_9146_4b36_8bbb_f15ba78d288c.slice - libcontainer container kubepods-besteffort-pod10877efe_9146_4b36_8bbb_f15ba78d288c.slice. Dec 13 01:48:48.007724 containerd[1444]: time="2024-12-13T01:48:48.007690094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cxgk2,Uid:10877efe-9146-4b36-8bbb-f15ba78d288c,Namespace:calico-system,Attempt:0,}" Dec 13 01:48:48.130327 containerd[1444]: time="2024-12-13T01:48:48.130224294Z" level=error msg="Failed to destroy network for sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:48.130627 containerd[1444]: time="2024-12-13T01:48:48.130543414Z" level=error msg="encountered an error cleaning up failed sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:48.130627 containerd[1444]: time="2024-12-13T01:48:48.130586774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cxgk2,Uid:10877efe-9146-4b36-8bbb-f15ba78d288c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:48.131268 kubelet[1744]: E1213 01:48:48.130929 1744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:48.131268 kubelet[1744]: E1213 01:48:48.130990 1744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cxgk2" Dec 13 01:48:48.131268 kubelet[1744]: E1213 01:48:48.131011 1744 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cxgk2" Dec 13 01:48:48.131488 kubelet[1744]: E1213 01:48:48.131048 1744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cxgk2_calico-system(10877efe-9146-4b36-8bbb-f15ba78d288c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cxgk2_calico-system(10877efe-9146-4b36-8bbb-f15ba78d288c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cxgk2" podUID="10877efe-9146-4b36-8bbb-f15ba78d288c" Dec 13 01:48:48.131781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7-shm.mount: Deactivated successfully. 
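Every sandbox failure above bottoms out in the same Calico CNI error: "stat /var/lib/calico/nodename: no such file or directory". The calico/node container (started a little later in this log) writes that file once it is up; until then the CNI plugin refuses to network any pod, and the kubelet keeps retrying with CreatePodSandboxError / KillPodSandboxError. A toy reproduction of just that guard check, not Calico's actual code:

```go
// Toy sketch of the nodename guard implied by the repeated
// "stat /var/lib/calico/nodename" errors above.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // path taken from the log

func detectNodename() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// Same wording the CNI plugin surfaces in the log.
		return "", fmt.Errorf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := detectNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node:", name)
}
```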
Dec 13 01:48:48.943299 kubelet[1744]: E1213 01:48:48.943244 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:49.025680 kubelet[1744]: I1213 01:48:49.025132 1744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Dec 13 01:48:49.025901 containerd[1444]: time="2024-12-13T01:48:49.025868334Z" level=info msg="StopPodSandbox for \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\"" Dec 13 01:48:49.026244 containerd[1444]: time="2024-12-13T01:48:49.026029054Z" level=info msg="Ensure that sandbox 52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7 in task-service has been cleanup successfully" Dec 13 01:48:49.050648 containerd[1444]: time="2024-12-13T01:48:49.050567414Z" level=error msg="StopPodSandbox for \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\" failed" error="failed to destroy network for sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:49.051079 kubelet[1744]: E1213 01:48:49.050892 1744 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Dec 13 01:48:49.051079 kubelet[1744]: E1213 01:48:49.050952 1744 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7"} Dec 13 01:48:49.051079 kubelet[1744]: E1213 01:48:49.051017 1744 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10877efe-9146-4b36-8bbb-f15ba78d288c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:49.051079 kubelet[1744]: E1213 01:48:49.051038 1744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10877efe-9146-4b36-8bbb-f15ba78d288c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cxgk2" podUID="10877efe-9146-4b36-8bbb-f15ba78d288c" Dec 13 01:48:49.506200 systemd[1]: Created slice kubepods-besteffort-podff173915_6b58_4024_b15f_a31f4dae6816.slice - libcontainer container kubepods-besteffort-podff173915_6b58_4024_b15f_a31f4dae6816.slice. 
Dec 13 01:48:49.618665 kubelet[1744]: I1213 01:48:49.618512 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knpg7\" (UniqueName: \"kubernetes.io/projected/ff173915-6b58-4024-b15f-a31f4dae6816-kube-api-access-knpg7\") pod \"nginx-deployment-8587fbcb89-9nlj6\" (UID: \"ff173915-6b58-4024-b15f-a31f4dae6816\") " pod="default/nginx-deployment-8587fbcb89-9nlj6" Dec 13 01:48:49.689701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196680693.mount: Deactivated successfully. Dec 13 01:48:49.810250 containerd[1444]: time="2024-12-13T01:48:49.810141694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9nlj6,Uid:ff173915-6b58-4024-b15f-a31f4dae6816,Namespace:default,Attempt:0,}" Dec 13 01:48:49.943961 kubelet[1744]: E1213 01:48:49.943917 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:49.951427 containerd[1444]: time="2024-12-13T01:48:49.951383094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:49.952698 containerd[1444]: time="2024-12-13T01:48:49.952589574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:48:49.953420 containerd[1444]: time="2024-12-13T01:48:49.953351374Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:49.957597 containerd[1444]: time="2024-12-13T01:48:49.957550494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:48:49.958229 containerd[1444]: time="2024-12-13T01:48:49.958089734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 2.93727992s" Dec 13 01:48:49.958229 containerd[1444]: time="2024-12-13T01:48:49.958123574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:48:49.968462 containerd[1444]: time="2024-12-13T01:48:49.967363934Z" level=info msg="CreateContainer within sandbox \"b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:48:49.979554 containerd[1444]: time="2024-12-13T01:48:49.979454414Z" level=info msg="CreateContainer within sandbox \"b310048e3f78a6afa3a05b7555c7a368e0f014d3836134fbb9083be8b5be8881\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5d6212ef638450bbca30bea1f985b376356fcb6d169fe662c9399c78a0dde276\"" Dec 13 01:48:49.981107 containerd[1444]: time="2024-12-13T01:48:49.980230134Z" level=info msg="StartContainer for \"5d6212ef638450bbca30bea1f985b376356fcb6d169fe662c9399c78a0dde276\"" Dec 13 01:48:50.008072 containerd[1444]: time="2024-12-13T01:48:50.008024134Z" level=error msg="Failed to destroy network for sandbox 
\"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:50.008367 containerd[1444]: time="2024-12-13T01:48:50.008342654Z" level=error msg="encountered an error cleaning up failed sandbox \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:50.008419 containerd[1444]: time="2024-12-13T01:48:50.008396414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9nlj6,Uid:ff173915-6b58-4024-b15f-a31f4dae6816,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:50.008991 kubelet[1744]: E1213 01:48:50.008563 1744 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:50.008991 kubelet[1744]: E1213 01:48:50.008682 1744 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9nlj6" Dec 13 01:48:50.008991 kubelet[1744]: E1213 01:48:50.008699 1744 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9nlj6" Dec 13 01:48:50.008808 systemd[1]: Started cri-containerd-5d6212ef638450bbca30bea1f985b376356fcb6d169fe662c9399c78a0dde276.scope - libcontainer container 5d6212ef638450bbca30bea1f985b376356fcb6d169fe662c9399c78a0dde276. 
Dec 13 01:48:50.009165 kubelet[1744]: E1213 01:48:50.008744 1744 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9nlj6_default(ff173915-6b58-4024-b15f-a31f4dae6816)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9nlj6_default(ff173915-6b58-4024-b15f-a31f4dae6816)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9nlj6" podUID="ff173915-6b58-4024-b15f-a31f4dae6816" Dec 13 01:48:50.027901 kubelet[1744]: I1213 01:48:50.027875 1744 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Dec 13 01:48:50.029194 containerd[1444]: time="2024-12-13T01:48:50.028892694Z" level=info msg="StopPodSandbox for \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\"" Dec 13 01:48:50.030423 containerd[1444]: time="2024-12-13T01:48:50.030362774Z" level=info msg="Ensure that sandbox eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f in task-service has been cleanup successfully" Dec 13 01:48:50.034160 containerd[1444]: time="2024-12-13T01:48:50.034124454Z" level=info msg="StartContainer for \"5d6212ef638450bbca30bea1f985b376356fcb6d169fe662c9399c78a0dde276\" returns successfully" Dec 13 01:48:50.059271 containerd[1444]: time="2024-12-13T01:48:50.059162694Z" level=error msg="StopPodSandbox for \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\" failed" error="failed to destroy network for sandbox \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:48:50.060115 kubelet[1744]: E1213 01:48:50.059980 1744 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Dec 13 01:48:50.060115 kubelet[1744]: E1213 01:48:50.060032 1744 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f"} Dec 13 01:48:50.060115 kubelet[1744]: E1213 01:48:50.060065 1744 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff173915-6b58-4024-b15f-a31f4dae6816\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:48:50.060115 kubelet[1744]: E1213 01:48:50.060086 1744 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"ff173915-6b58-4024-b15f-a31f4dae6816\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9nlj6" podUID="ff173915-6b58-4024-b15f-a31f4dae6816" Dec 13 01:48:50.171395 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:48:50.171519 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:48:50.688676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f-shm.mount: Deactivated successfully. Dec 13 01:48:50.944548 kubelet[1744]: E1213 01:48:50.944431 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:51.041228 kubelet[1744]: E1213 01:48:51.041186 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:51.057826 kubelet[1744]: I1213 01:48:51.057762 1744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ld8fw" podStartSLOduration=4.123927534 podStartE2EDuration="13.057743574s" podCreationTimestamp="2024-12-13 01:48:38 +0000 UTC" firstStartedPulling="2024-12-13 01:48:41.025731454 +0000 UTC m=+5.250812241" lastFinishedPulling="2024-12-13 01:48:49.959547494 +0000 UTC m=+14.184628281" observedRunningTime="2024-12-13 01:48:51.057469134 +0000 UTC m=+15.282549921" watchObservedRunningTime="2024-12-13 01:48:51.057743574 +0000 UTC m=+15.282824321" Dec 13 01:48:51.506689 kernel: bpftool[2550]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:48:51.652456 systemd-networkd[1392]: vxlan.calico: Link UP Dec 13 01:48:51.652461 systemd-networkd[1392]: vxlan.calico: Gained carrier Dec 13 01:48:51.944964 kubelet[1744]: E1213 01:48:51.944808 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:52.042476 kubelet[1744]: I1213 01:48:52.042442 1744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:52.042838 kubelet[1744]: E1213 01:48:52.042821 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:52.945353 kubelet[1744]: E1213 01:48:52.945287 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:53.676781 systemd-networkd[1392]: vxlan.calico: Gained IPv6LL Dec 13 01:48:53.946556 kubelet[1744]: E1213 01:48:53.946434 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:54.295481 kubelet[1744]: I1213 01:48:54.293698 1744 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:48:54.295481 kubelet[1744]: E1213 01:48:54.294083 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:48:54.947055 kubelet[1744]: E1213 01:48:54.947014 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:55.947662 kubelet[1744]: E1213 01:48:55.947606 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:56.948603 kubelet[1744]: E1213 01:48:56.948520 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:57.938500 kubelet[1744]: E1213 01:48:57.938447 1744 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:57.948926 kubelet[1744]: E1213 01:48:57.948886 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:58.949837 kubelet[1744]: E1213 01:48:58.949784 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:48:59.950479 kubelet[1744]: E1213 01:48:59.950417 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:00.951029 kubelet[1744]: E1213 01:49:00.950978 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:01.952092 kubelet[1744]: E1213 01:49:01.952048 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:02.953011 kubelet[1744]: E1213 01:49:02.952962 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:03.953801 kubelet[1744]: E1213 01:49:03.953749 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:03.997058 containerd[1444]: time="2024-12-13T01:49:03.996915157Z" level=info msg="StopPodSandbox for \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\"" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.039 [INFO][2705] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.039 [INFO][2705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" iface="eth0" netns="/var/run/netns/cni-88ebd3a5-d1b8-6083-a081-5dc8a0573450" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.039 [INFO][2705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" iface="eth0" netns="/var/run/netns/cni-88ebd3a5-d1b8-6083-a081-5dc8a0573450" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.039 [INFO][2705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" iface="eth0" netns="/var/run/netns/cni-88ebd3a5-d1b8-6083-a081-5dc8a0573450" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.039 [INFO][2705] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.039 [INFO][2705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.090 [INFO][2712] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" HandleID="k8s-pod-network.52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Workload="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.090 [INFO][2712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.090 [INFO][2712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.099 [WARNING][2712] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" HandleID="k8s-pod-network.52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Workload="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.099 [INFO][2712] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" HandleID="k8s-pod-network.52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Workload="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.100 [INFO][2712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:04.106892 containerd[1444]: 2024-12-13 01:49:04.102 [INFO][2705] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7" Dec 13 01:49:04.108953 containerd[1444]: time="2024-12-13T01:49:04.107043427Z" level=info msg="TearDown network for sandbox \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\" successfully" Dec 13 01:49:04.108953 containerd[1444]: time="2024-12-13T01:49:04.107080267Z" level=info msg="StopPodSandbox for \"52a7dd37dd7a73d9bad6247af60ad5c2643333e0c9c0de72dd6240073ac57da7\" returns successfully" Dec 13 01:49:04.108953 containerd[1444]: time="2024-12-13T01:49:04.107762501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cxgk2,Uid:10877efe-9146-4b36-8bbb-f15ba78d288c,Namespace:calico-system,Attempt:1,}" Dec 13 01:49:04.109567 systemd[1]: run-netns-cni\x2d88ebd3a5\x2dd1b8\x2d6083\x2da081\x2d5dc8a0573450.mount: Deactivated successfully. 
Dec 13 01:49:04.213271 systemd-networkd[1392]: calie3e62ad1990: Link UP Dec 13 01:49:04.214544 systemd-networkd[1392]: calie3e62ad1990: Gained carrier Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.152 [INFO][2722] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.145-k8s-csi--node--driver--cxgk2-eth0 csi-node-driver- calico-system 10877efe-9146-4b36-8bbb-f15ba78d288c 1070 0 2024-12-13 01:48:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.145 csi-node-driver-cxgk2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie3e62ad1990 [] []}} ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Namespace="calico-system" Pod="csi-node-driver-cxgk2" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--cxgk2-" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.152 [INFO][2722] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Namespace="calico-system" Pod="csi-node-driver-cxgk2" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.174 [INFO][2734] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" HandleID="k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Workload="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.185 [INFO][2734] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" HandleID="k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Workload="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.145", "pod":"csi-node-driver-cxgk2", "timestamp":"2024-12-13 01:49:04.174751008 +0000 UTC"}, Hostname:"10.0.0.145", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.185 [INFO][2734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.185 [INFO][2734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.185 [INFO][2734] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.145' Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.187 [INFO][2734] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.191 [INFO][2734] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.195 [INFO][2734] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.197 [INFO][2734] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.199 [INFO][2734] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.199 [INFO][2734] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.200 [INFO][2734] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013 Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.205 [INFO][2734] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.209 [INFO][2734] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.1/26] block=192.168.31.0/26 handle="k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.209 [INFO][2734] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.1/26] handle="k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" host="10.0.0.145" Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.209 [INFO][2734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
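The IPAM lines above ("Trying affinity for 192.168.31.0/26", "Attempting to assign 1 addresses from block", "Successfully claimed IPs: [192.168.31.1/26]") show Calico handing out single addresses from a /26 block affine to this node (10.0.0.145) while holding the host-wide IPAM lock. The following is only a standard-library illustration of the "pick a free /32 from the /26" step, not Calico's allocator, which also persists handles and block affinity in its datastore:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the block that is not already used.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.31.0/26")
	// Assume the block's first address was already taken earlier; this log
	// starts handing out addresses at .1.
	used := map[netip.Addr]bool{netip.MustParseAddr("192.168.31.0"): true}

	ip, _ := nextFree(block, used)
	fmt.Println(ip) // 192.168.31.1 — the address claimed for csi-node-driver-cxgk2
	used[ip] = true

	ip, _ = nextFree(block, used)
	fmt.Println(ip) // 192.168.31.2 — the address later claimed for the nginx pod
}
```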
Dec 13 01:49:04.227521 containerd[1444]: 2024-12-13 01:49:04.209 [INFO][2734] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.1/26] IPv6=[] ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" HandleID="k8s-pod-network.2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Workload="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.228182 containerd[1444]: 2024-12-13 01:49:04.211 [INFO][2722] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Namespace="calico-system" Pod="csi-node-driver-cxgk2" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-csi--node--driver--cxgk2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10877efe-9146-4b36-8bbb-f15ba78d288c", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"", Pod:"csi-node-driver-cxgk2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3e62ad1990", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:04.228182 containerd[1444]: 2024-12-13 01:49:04.211 [INFO][2722] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.1/32] ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Namespace="calico-system" Pod="csi-node-driver-cxgk2" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.228182 containerd[1444]: 2024-12-13 01:49:04.211 [INFO][2722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3e62ad1990 ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Namespace="calico-system" Pod="csi-node-driver-cxgk2" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.228182 containerd[1444]: 2024-12-13 01:49:04.213 [INFO][2722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Namespace="calico-system" Pod="csi-node-driver-cxgk2" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.228182 containerd[1444]: 2024-12-13 01:49:04.214 [INFO][2722] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Namespace="calico-system" Pod="csi-node-driver-cxgk2" 
WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-csi--node--driver--cxgk2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"10877efe-9146-4b36-8bbb-f15ba78d288c", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013", Pod:"csi-node-driver-cxgk2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3e62ad1990", MAC:"9a:e5:cf:f2:59:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:04.228182 containerd[1444]: 2024-12-13 01:49:04.224 [INFO][2722] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013" Namespace="calico-system" Pod="csi-node-driver-cxgk2" WorkloadEndpoint="10.0.0.145-k8s-csi--node--driver--cxgk2-eth0" Dec 13 01:49:04.244680 containerd[1444]: time="2024-12-13T01:49:04.244580009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:49:04.244680 containerd[1444]: time="2024-12-13T01:49:04.244633048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:49:04.244680 containerd[1444]: time="2024-12-13T01:49:04.244664168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:04.244844 containerd[1444]: time="2024-12-13T01:49:04.244737167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:04.267876 systemd[1]: Started cri-containerd-2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013.scope - libcontainer container 2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013. 
Dec 13 01:49:04.276304 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:49:04.284849 containerd[1444]: time="2024-12-13T01:49:04.284796521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cxgk2,Uid:10877efe-9146-4b36-8bbb-f15ba78d288c,Namespace:calico-system,Attempt:1,} returns sandbox id \"2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013\"" Dec 13 01:49:04.286656 containerd[1444]: time="2024-12-13T01:49:04.286424426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:49:04.954942 kubelet[1744]: E1213 01:49:04.954891 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:04.997023 containerd[1444]: time="2024-12-13T01:49:04.996912644Z" level=info msg="StopPodSandbox for \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\"" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.035 [INFO][2815] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.036 [INFO][2815] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" iface="eth0" netns="/var/run/netns/cni-77296417-b27b-92ee-ea7c-d5d72111ae33" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.036 [INFO][2815] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" iface="eth0" netns="/var/run/netns/cni-77296417-b27b-92ee-ea7c-d5d72111ae33" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.036 [INFO][2815] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" iface="eth0" netns="/var/run/netns/cni-77296417-b27b-92ee-ea7c-d5d72111ae33" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.036 [INFO][2815] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.036 [INFO][2815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.052 [INFO][2823] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" HandleID="k8s-pod-network.eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Workload="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.053 [INFO][2823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.053 [INFO][2823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.061 [WARNING][2823] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" HandleID="k8s-pod-network.eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Workload="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.061 [INFO][2823] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" HandleID="k8s-pod-network.eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Workload="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.063 [INFO][2823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:05.066580 containerd[1444]: 2024-12-13 01:49:05.064 [INFO][2815] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f" Dec 13 01:49:05.067379 containerd[1444]: time="2024-12-13T01:49:05.067265918Z" level=info msg="TearDown network for sandbox \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\" successfully" Dec 13 01:49:05.067379 containerd[1444]: time="2024-12-13T01:49:05.067294678Z" level=info msg="StopPodSandbox for \"eab49808d4f49194ee903faf3d6dc6993bb31c181a52ed0799acc9c30a05e61f\" returns successfully" Dec 13 01:49:05.067827 containerd[1444]: time="2024-12-13T01:49:05.067765354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9nlj6,Uid:ff173915-6b58-4024-b15f-a31f4dae6816,Namespace:default,Attempt:1,}" Dec 13 01:49:05.110136 systemd[1]: run-netns-cni\x2d77296417\x2db27b\x2d92ee\x2dea7c\x2dd5d72111ae33.mount: Deactivated successfully. Dec 13 01:49:05.168784 systemd-networkd[1392]: cali168c0622222: Link UP Dec 13 01:49:05.169138 systemd-networkd[1392]: cali168c0622222: Gained carrier Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.106 [INFO][2830] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0 nginx-deployment-8587fbcb89- default ff173915-6b58-4024-b15f-a31f4dae6816 1076 0 2024-12-13 01:48:49 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.145 nginx-deployment-8587fbcb89-9nlj6 eth0 default [] [] [kns.default ksa.default.default] cali168c0622222 [] []}} ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Namespace="default" Pod="nginx-deployment-8587fbcb89-9nlj6" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.106 [INFO][2830] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Namespace="default" Pod="nginx-deployment-8587fbcb89-9nlj6" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.131 [INFO][2843] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" HandleID="k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Workload="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.180067 containerd[1444]: 
2024-12-13 01:49:05.142 [INFO][2843] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" HandleID="k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Workload="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000375590), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.145", "pod":"nginx-deployment-8587fbcb89-9nlj6", "timestamp":"2024-12-13 01:49:05.131456168 +0000 UTC"}, Hostname:"10.0.0.145", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.142 [INFO][2843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.142 [INFO][2843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.142 [INFO][2843] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.145' Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.144 [INFO][2843] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.148 [INFO][2843] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.152 [INFO][2843] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.153 [INFO][2843] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.155 [INFO][2843] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.155 [INFO][2843] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.157 [INFO][2843] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154 Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.161 [INFO][2843] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.165 [INFO][2843] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.2/26] block=192.168.31.0/26 handle="k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.165 [INFO][2843] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.2/26] handle="k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" host="10.0.0.145" Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.165 [INFO][2843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:49:05.180067 containerd[1444]: 2024-12-13 01:49:05.165 [INFO][2843] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.2/26] IPv6=[] ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" HandleID="k8s-pod-network.edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Workload="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.180569 containerd[1444]: 2024-12-13 01:49:05.167 [INFO][2830] cni-plugin/k8s.go 386: Populated endpoint ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Namespace="default" Pod="nginx-deployment-8587fbcb89-9nlj6" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"ff173915-6b58-4024-b15f-a31f4dae6816", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-9nlj6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali168c0622222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:05.180569 containerd[1444]: 2024-12-13 01:49:05.167 [INFO][2830] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.2/32] ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Namespace="default" Pod="nginx-deployment-8587fbcb89-9nlj6" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.180569 containerd[1444]: 2024-12-13 01:49:05.167 [INFO][2830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali168c0622222 ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Namespace="default" Pod="nginx-deployment-8587fbcb89-9nlj6" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.180569 containerd[1444]: 2024-12-13 01:49:05.169 [INFO][2830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Namespace="default" Pod="nginx-deployment-8587fbcb89-9nlj6" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.180569 containerd[1444]: 2024-12-13 01:49:05.169 [INFO][2830] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Namespace="default" Pod="nginx-deployment-8587fbcb89-9nlj6" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"ff173915-6b58-4024-b15f-a31f4dae6816", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154", Pod:"nginx-deployment-8587fbcb89-9nlj6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali168c0622222", MAC:"82:9d:64:3c:51:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:05.180569 containerd[1444]: 2024-12-13 01:49:05.175 [INFO][2830] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154" Namespace="default" Pod="nginx-deployment-8587fbcb89-9nlj6" WorkloadEndpoint="10.0.0.145-k8s-nginx--deployment--8587fbcb89--9nlj6-eth0" Dec 13 01:49:05.203178 containerd[1444]: time="2024-12-13T01:49:05.201914563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:49:05.203178 containerd[1444]: time="2024-12-13T01:49:05.203132553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:49:05.203178 containerd[1444]: time="2024-12-13T01:49:05.203151273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:05.203371 containerd[1444]: time="2024-12-13T01:49:05.203245272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:05.223921 systemd[1]: Started cri-containerd-edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154.scope - libcontainer container edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154. 
Dec 13 01:49:05.235160 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:49:05.252017 containerd[1444]: time="2024-12-13T01:49:05.251950294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9nlj6,Uid:ff173915-6b58-4024-b15f-a31f4dae6816,Namespace:default,Attempt:1,} returns sandbox id \"edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154\"" Dec 13 01:49:05.293034 containerd[1444]: time="2024-12-13T01:49:05.292987582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:05.293751 containerd[1444]: time="2024-12-13T01:49:05.293719976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:49:05.294744 containerd[1444]: time="2024-12-13T01:49:05.294466369Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:05.296710 containerd[1444]: time="2024-12-13T01:49:05.296658590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:05.297375 containerd[1444]: time="2024-12-13T01:49:05.297338424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.010880239s" Dec 13 01:49:05.297441 containerd[1444]: time="2024-12-13T01:49:05.297383184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:49:05.298657 containerd[1444]: time="2024-12-13T01:49:05.298600294Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:49:05.299310 containerd[1444]: time="2024-12-13T01:49:05.299284208Z" level=info msg="CreateContainer within sandbox \"2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:49:05.310332 containerd[1444]: time="2024-12-13T01:49:05.310284193Z" level=info msg="CreateContainer within sandbox \"2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"274069efc1056cf8db0f290452be09894801e9427e97e6e1c7f895b9276e1682\"" Dec 13 01:49:05.310707 containerd[1444]: time="2024-12-13T01:49:05.310685630Z" level=info msg="StartContainer for \"274069efc1056cf8db0f290452be09894801e9427e97e6e1c7f895b9276e1682\"" Dec 13 01:49:05.337814 systemd[1]: Started cri-containerd-274069efc1056cf8db0f290452be09894801e9427e97e6e1c7f895b9276e1682.scope - libcontainer container 274069efc1056cf8db0f290452be09894801e9427e97e6e1c7f895b9276e1682. 
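The pull result above lists three identifiers for one image: the repo tag (mutable, ghcr.io/flatcar/calico/csi:v3.29.1), the repo digest (a content-addressed manifest reference, ...@sha256:eaa7e0...), and the image id (the digest of the image configuration, sha256:3c1173...). A simplified sketch of splitting such a reference into its parts — an illustration only, not containerd's reference parser:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks an image reference into repository, tag, and digest.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// A ":" after the last "/" separates the tag from the repository;
	// a ":" before that would belong to a registry port.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("ghcr.io/flatcar/calico/csi:v3.29.1")
	fmt.Printf("repo=%s tag=%s digest=%s\n", repo, tag, digest)

	repo, tag, digest = splitRef("ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3")
	fmt.Printf("repo=%s tag=%s digest=%s\n", repo, tag, digest)
}
```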
Dec 13 01:49:05.366704 containerd[1444]: time="2024-12-13T01:49:05.365899676Z" level=info msg="StartContainer for \"274069efc1056cf8db0f290452be09894801e9427e97e6e1c7f895b9276e1682\" returns successfully" Dec 13 01:49:05.452867 systemd-networkd[1392]: calie3e62ad1990: Gained IPv6LL Dec 13 01:49:05.955632 kubelet[1744]: E1213 01:49:05.955583 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:06.861213 systemd-networkd[1392]: cali168c0622222: Gained IPv6LL Dec 13 01:49:06.956979 kubelet[1744]: E1213 01:49:06.956695 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:07.294405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713904918.mount: Deactivated successfully. Dec 13 01:49:07.957413 kubelet[1744]: E1213 01:49:07.957376 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:08.085098 containerd[1444]: time="2024-12-13T01:49:08.084436815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:08.087578 containerd[1444]: time="2024-12-13T01:49:08.087526553Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939" Dec 13 01:49:08.088632 containerd[1444]: time="2024-12-13T01:49:08.088597786Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:08.092011 containerd[1444]: time="2024-12-13T01:49:08.091973122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:08.093195 containerd[1444]: time="2024-12-13T01:49:08.092962995Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 2.794333182s" Dec 13 01:49:08.093195 containerd[1444]: time="2024-12-13T01:49:08.092993675Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:49:08.094626 containerd[1444]: time="2024-12-13T01:49:08.094501144Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:49:08.095490 containerd[1444]: time="2024-12-13T01:49:08.095307858Z" level=info msg="CreateContainer within sandbox \"edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:49:08.106487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204447748.mount: Deactivated successfully. 
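The kubelet error repeated roughly once per second throughout this log ("Unable to read config path ... /etc/kubernetes/manifests") comes from its file-based config source (file.go / file_linux.go): the kubelet is pointed at a static-pod manifest directory that was never created on this node, and it logs and skips the missing path on each sync. A rough standard-library sketch of the check implied by that message — not the kubelet's code; on a real node the message typically stops once the directory exists or staticPodPath is removed from the kubelet configuration:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// checkStaticPodPath mirrors the behaviour implied by the log line: a missing
// manifest directory is reported and ignored rather than treated as fatal.
func checkStaticPodPath(path string) {
	if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
		fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", path)
		return
	}
	fmt.Printf("would read static pod manifests from %q\n", path)
}

func main() {
	checkStaticPodPath("/etc/kubernetes/manifests")
}
```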
Dec 13 01:49:08.107665 containerd[1444]: time="2024-12-13T01:49:08.107556572Z" level=info msg="CreateContainer within sandbox \"edae9ed38afd407645b121c8bea0413c879ab3b42036a79029c5acfd3b483154\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"23364d66f62f597bef8b7ee9a7f2839af0fedfafc0fc25d5d25c98b2b8703e8a\"" Dec 13 01:49:08.108180 containerd[1444]: time="2024-12-13T01:49:08.108103528Z" level=info msg="StartContainer for \"23364d66f62f597bef8b7ee9a7f2839af0fedfafc0fc25d5d25c98b2b8703e8a\"" Dec 13 01:49:08.222837 systemd[1]: Started cri-containerd-23364d66f62f597bef8b7ee9a7f2839af0fedfafc0fc25d5d25c98b2b8703e8a.scope - libcontainer container 23364d66f62f597bef8b7ee9a7f2839af0fedfafc0fc25d5d25c98b2b8703e8a. Dec 13 01:49:08.248049 containerd[1444]: time="2024-12-13T01:49:08.248010979Z" level=info msg="StartContainer for \"23364d66f62f597bef8b7ee9a7f2839af0fedfafc0fc25d5d25c98b2b8703e8a\" returns successfully" Dec 13 01:49:08.957766 kubelet[1744]: E1213 01:49:08.957717 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:09.424029 containerd[1444]: time="2024-12-13T01:49:09.423981691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:09.424911 containerd[1444]: time="2024-12-13T01:49:09.424761606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:49:09.425713 containerd[1444]: time="2024-12-13T01:49:09.425632760Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:09.428003 containerd[1444]: time="2024-12-13T01:49:09.427815746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:09.428562 containerd[1444]: time="2024-12-13T01:49:09.428535341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.333997397s" Dec 13 01:49:09.428619 containerd[1444]: time="2024-12-13T01:49:09.428566941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:49:09.430345 containerd[1444]: time="2024-12-13T01:49:09.430307489Z" level=info msg="CreateContainer within sandbox \"2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:49:09.442194 containerd[1444]: time="2024-12-13T01:49:09.442141131Z" level=info msg="CreateContainer within sandbox \"2246f14c38582415f945aac63f456f6c0636a65658866cecc9ab497e24c2b013\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6fb70315aaff3ac9f4728d8bf355a963a1c5bb6342c7f20c699c06e0e6bab41f\"" Dec 13 01:49:09.444249 containerd[1444]: 
time="2024-12-13T01:49:09.442840286Z" level=info msg="StartContainer for \"6fb70315aaff3ac9f4728d8bf355a963a1c5bb6342c7f20c699c06e0e6bab41f\"" Dec 13 01:49:09.479846 systemd[1]: Started cri-containerd-6fb70315aaff3ac9f4728d8bf355a963a1c5bb6342c7f20c699c06e0e6bab41f.scope - libcontainer container 6fb70315aaff3ac9f4728d8bf355a963a1c5bb6342c7f20c699c06e0e6bab41f. Dec 13 01:49:09.532724 containerd[1444]: time="2024-12-13T01:49:09.532681411Z" level=info msg="StartContainer for \"6fb70315aaff3ac9f4728d8bf355a963a1c5bb6342c7f20c699c06e0e6bab41f\" returns successfully" Dec 13 01:49:09.958485 kubelet[1744]: E1213 01:49:09.958429 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:10.026184 kubelet[1744]: I1213 01:49:10.026139 1744 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:49:10.026184 kubelet[1744]: I1213 01:49:10.026180 1744 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:49:10.090530 kubelet[1744]: I1213 01:49:10.090466 1744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-9nlj6" podStartSLOduration=18.249988325 podStartE2EDuration="21.090449912s" podCreationTimestamp="2024-12-13 01:48:49 +0000 UTC" firstStartedPulling="2024-12-13 01:49:05.25354276 +0000 UTC m=+29.478623547" lastFinishedPulling="2024-12-13 01:49:08.094004347 +0000 UTC m=+32.319085134" observedRunningTime="2024-12-13 01:49:09.085218736 +0000 UTC m=+33.310299603" watchObservedRunningTime="2024-12-13 01:49:10.090449912 +0000 UTC m=+34.315530699" Dec 13 01:49:10.958875 kubelet[1744]: E1213 01:49:10.958828 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:11.959718 kubelet[1744]: E1213 01:49:11.959670 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:12.567279 kubelet[1744]: I1213 01:49:12.566566 1744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cxgk2" podStartSLOduration=29.423582111 podStartE2EDuration="34.5665483s" podCreationTimestamp="2024-12-13 01:48:38 +0000 UTC" firstStartedPulling="2024-12-13 01:49:04.286144828 +0000 UTC m=+28.511225615" lastFinishedPulling="2024-12-13 01:49:09.429111017 +0000 UTC m=+33.654191804" observedRunningTime="2024-12-13 01:49:10.09072631 +0000 UTC m=+34.315807097" watchObservedRunningTime="2024-12-13 01:49:12.5665483 +0000 UTC m=+36.791629087" Dec 13 01:49:12.573366 systemd[1]: Created slice kubepods-besteffort-poda1340fab_b008_45ba_82d0_407ba1d47bd4.slice - libcontainer container kubepods-besteffort-poda1340fab_b008_45ba_82d0_407ba1d47bd4.slice. Dec 13 01:49:12.623468 update_engine[1432]: I20241213 01:49:12.623392 1432 update_attempter.cc:509] Updating boot flags... 
Dec 13 01:49:12.655451 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (3096) Dec 13 01:49:12.688688 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (3098) Dec 13 01:49:12.713598 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (3098) Dec 13 01:49:12.724148 kubelet[1744]: I1213 01:49:12.724118 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a1340fab-b008-45ba-82d0-407ba1d47bd4-data\") pod \"nfs-server-provisioner-0\" (UID: \"a1340fab-b008-45ba-82d0-407ba1d47bd4\") " pod="default/nfs-server-provisioner-0" Dec 13 01:49:12.724288 kubelet[1744]: I1213 01:49:12.724271 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk4n9\" (UniqueName: \"kubernetes.io/projected/a1340fab-b008-45ba-82d0-407ba1d47bd4-kube-api-access-wk4n9\") pod \"nfs-server-provisioner-0\" (UID: \"a1340fab-b008-45ba-82d0-407ba1d47bd4\") " pod="default/nfs-server-provisioner-0" Dec 13 01:49:12.882804 containerd[1444]: time="2024-12-13T01:49:12.882685654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a1340fab-b008-45ba-82d0-407ba1d47bd4,Namespace:default,Attempt:0,}" Dec 13 01:49:12.960029 kubelet[1744]: E1213 01:49:12.959989 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:12.982242 systemd-networkd[1392]: cali60e51b789ff: Link UP Dec 13 01:49:12.983028 systemd-networkd[1392]: cali60e51b789ff: Gained carrier Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.921 [INFO][3105] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.145-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a1340fab-b008-45ba-82d0-407ba1d47bd4 1156 0 2024-12-13 01:49:12 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.145 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.921 [INFO][3105] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.944 [INFO][3120] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" HandleID="k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Workload="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.954 [INFO][3120] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" HandleID="k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Workload="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004e1120), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.145", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 01:49:12.944132398 +0000 UTC"}, Hostname:"10.0.0.145", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.954 [INFO][3120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.954 [INFO][3120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.954 [INFO][3120] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.145' Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.956 [INFO][3120] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.961 [INFO][3120] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.965 [INFO][3120] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.966 [INFO][3120] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.969 [INFO][3120] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.969 [INFO][3120] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.970 [INFO][3120] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.973 [INFO][3120] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.978 [INFO][3120] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.3/26] block=192.168.31.0/26 handle="k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.978 [INFO][3120] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.3/26] 
handle="k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" host="10.0.0.145" Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.978 [INFO][3120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:12.994715 containerd[1444]: 2024-12-13 01:49:12.978 [INFO][3120] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.3/26] IPv6=[] ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" HandleID="k8s-pod-network.74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Workload="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:49:12.996709 containerd[1444]: 2024-12-13 01:49:12.980 [INFO][3105] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a1340fab-b008-45ba-82d0-407ba1d47bd4", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:12.996709 containerd[1444]: 2024-12-13 01:49:12.980 [INFO][3105] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.3/32] ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:49:12.996709 containerd[1444]: 2024-12-13 01:49:12.980 [INFO][3105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:49:12.996709 containerd[1444]: 2024-12-13 01:49:12.983 [INFO][3105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:49:12.996964 containerd[1444]: 2024-12-13 01:49:12.983 [INFO][3105] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a1340fab-b008-45ba-82d0-407ba1d47bd4", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.31.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"06:76:11:9b:9c:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:12.996964 containerd[1444]: 2024-12-13 01:49:12.990 [INFO][3105] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.145-k8s-nfs--server--provisioner--0-eth0" Dec 13 01:49:13.012540 containerd[1444]: time="2024-12-13T01:49:13.012443069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:49:13.012540 containerd[1444]: time="2024-12-13T01:49:13.012504389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:49:13.012540 containerd[1444]: time="2024-12-13T01:49:13.012525749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:13.012741 containerd[1444]: time="2024-12-13T01:49:13.012619548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:13.039787 systemd[1]: Started cri-containerd-74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df.scope - libcontainer container 74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df. 
Dec 13 01:49:13.049592 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:49:13.102581 containerd[1444]: time="2024-12-13T01:49:13.102546888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a1340fab-b008-45ba-82d0-407ba1d47bd4,Namespace:default,Attempt:0,} returns sandbox id \"74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df\"" Dec 13 01:49:13.104085 containerd[1444]: time="2024-12-13T01:49:13.104035880Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:49:13.961292 kubelet[1744]: E1213 01:49:13.960446 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:14.092770 systemd-networkd[1392]: cali60e51b789ff: Gained IPv6LL Dec 13 01:49:14.961406 kubelet[1744]: E1213 01:49:14.961364 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:15.074236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611423938.mount: Deactivated successfully. Dec 13 01:49:15.962040 kubelet[1744]: E1213 01:49:15.962000 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:16.403109 containerd[1444]: time="2024-12-13T01:49:16.403041294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:16.403634 containerd[1444]: time="2024-12-13T01:49:16.403597972Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Dec 13 01:49:16.404444 containerd[1444]: time="2024-12-13T01:49:16.404399208Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:16.407056 containerd[1444]: time="2024-12-13T01:49:16.407019237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:16.408850 containerd[1444]: time="2024-12-13T01:49:16.408139193Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.304070433s" Dec 13 01:49:16.408850 containerd[1444]: time="2024-12-13T01:49:16.408172312Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Dec 13 01:49:16.410740 containerd[1444]: time="2024-12-13T01:49:16.410710622Z" level=info msg="CreateContainer within sandbox \"74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:49:16.420955 containerd[1444]: time="2024-12-13T01:49:16.420914859Z" level=info msg="CreateContainer within sandbox 
\"74aa6869c6626200f931610bb41a96da200416178a349dcae413bdad20a613df\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a5d78f6281ff1ac271adef550838a1f4472071beb9574ad18e8e6a8f36a024dd\"" Dec 13 01:49:16.421678 containerd[1444]: time="2024-12-13T01:49:16.421311897Z" level=info msg="StartContainer for \"a5d78f6281ff1ac271adef550838a1f4472071beb9574ad18e8e6a8f36a024dd\"" Dec 13 01:49:16.450783 systemd[1]: Started cri-containerd-a5d78f6281ff1ac271adef550838a1f4472071beb9574ad18e8e6a8f36a024dd.scope - libcontainer container a5d78f6281ff1ac271adef550838a1f4472071beb9574ad18e8e6a8f36a024dd. Dec 13 01:49:16.476947 containerd[1444]: time="2024-12-13T01:49:16.476905583Z" level=info msg="StartContainer for \"a5d78f6281ff1ac271adef550838a1f4472071beb9574ad18e8e6a8f36a024dd\" returns successfully" Dec 13 01:49:16.963340 kubelet[1744]: E1213 01:49:16.963295 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:17.113464 kubelet[1744]: I1213 01:49:17.113318 1744 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.8074993419999998 podStartE2EDuration="5.113301527s" podCreationTimestamp="2024-12-13 01:49:12 +0000 UTC" firstStartedPulling="2024-12-13 01:49:13.103773722 +0000 UTC m=+37.328854469" lastFinishedPulling="2024-12-13 01:49:16.409575867 +0000 UTC m=+40.634656654" observedRunningTime="2024-12-13 01:49:17.113027808 +0000 UTC m=+41.338108555" watchObservedRunningTime="2024-12-13 01:49:17.113301527 +0000 UTC m=+41.338382314" Dec 13 01:49:17.938863 kubelet[1744]: E1213 01:49:17.938822 1744 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:17.964426 kubelet[1744]: E1213 01:49:17.964385 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:18.964853 kubelet[1744]: E1213 01:49:18.964809 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:19.965655 kubelet[1744]: E1213 01:49:19.965585 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:20.966663 kubelet[1744]: E1213 01:49:20.966592 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:21.967769 kubelet[1744]: E1213 01:49:21.967721 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:22.968022 kubelet[1744]: E1213 01:49:22.967972 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:23.968717 kubelet[1744]: E1213 01:49:23.968673 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:24.349980 kubelet[1744]: E1213 01:49:24.349952 1744 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:49:24.968884 kubelet[1744]: E1213 01:49:24.968825 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:25.969440 kubelet[1744]: E1213 01:49:25.969393 1744 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:26.548231 systemd[1]: Created slice kubepods-besteffort-pod03dc8954_8c74_48c9_981d_8c6cf68e5ca7.slice - libcontainer container kubepods-besteffort-pod03dc8954_8c74_48c9_981d_8c6cf68e5ca7.slice. Dec 13 01:49:26.687705 kubelet[1744]: I1213 01:49:26.687474 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xx7f\" (UniqueName: \"kubernetes.io/projected/03dc8954-8c74-48c9-981d-8c6cf68e5ca7-kube-api-access-5xx7f\") pod \"test-pod-1\" (UID: \"03dc8954-8c74-48c9-981d-8c6cf68e5ca7\") " pod="default/test-pod-1" Dec 13 01:49:26.687705 kubelet[1744]: I1213 01:49:26.687517 1744 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89a1a9cf-4eb3-4ba0-84fb-df790bb7f91c\" (UniqueName: \"kubernetes.io/nfs/03dc8954-8c74-48c9-981d-8c6cf68e5ca7-pvc-89a1a9cf-4eb3-4ba0-84fb-df790bb7f91c\") pod \"test-pod-1\" (UID: \"03dc8954-8c74-48c9-981d-8c6cf68e5ca7\") " pod="default/test-pod-1" Dec 13 01:49:26.806679 kernel: FS-Cache: Loaded Dec 13 01:49:26.831947 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:49:26.832048 kernel: RPC: Registered udp transport module. Dec 13 01:49:26.832072 kernel: RPC: Registered tcp transport module. Dec 13 01:49:26.832818 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:49:26.832865 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 01:49:26.970187 kubelet[1744]: E1213 01:49:26.970133 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:27.008911 kernel: NFS: Registering the id_resolver key type Dec 13 01:49:27.009012 kernel: Key type id_resolver registered Dec 13 01:49:27.009036 kernel: Key type id_legacy registered Dec 13 01:49:27.041360 nfsidmap[3329]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:49:27.044897 nfsidmap[3332]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:49:27.152011 containerd[1444]: time="2024-12-13T01:49:27.151671727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:03dc8954-8c74-48c9-981d-8c6cf68e5ca7,Namespace:default,Attempt:0,}" Dec 13 01:49:27.282139 systemd-networkd[1392]: cali5ec59c6bf6e: Link UP Dec 13 01:49:27.282496 systemd-networkd[1392]: cali5ec59c6bf6e: Gained carrier Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.192 [INFO][3335] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.145-k8s-test--pod--1-eth0 default 03dc8954-8c74-48c9-981d-8c6cf68e5ca7 1222 0 2024-12-13 01:49:12 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.145 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.192 [INFO][3335] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.218 [INFO][3349] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" HandleID="k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Workload="10.0.0.145-k8s-test--pod--1-eth0" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.230 [INFO][3349] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" HandleID="k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Workload="10.0.0.145-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b940), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.145", "pod":"test-pod-1", "timestamp":"2024-12-13 01:49:27.218486868 +0000 UTC"}, Hostname:"10.0.0.145", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.231 [INFO][3349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.231 [INFO][3349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.231 [INFO][3349] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.145' Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.233 [INFO][3349] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" host="10.0.0.145" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.237 [INFO][3349] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.145" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.242 [INFO][3349] ipam/ipam.go 489: Trying affinity for 192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.243 [INFO][3349] ipam/ipam.go 155: Attempting to load block cidr=192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.246 [INFO][3349] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.31.0/26 host="10.0.0.145" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.246 [INFO][3349] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.31.0/26 handle="k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" host="10.0.0.145" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.247 [INFO][3349] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678 Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.257 [INFO][3349] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.31.0/26 handle="k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" host="10.0.0.145" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.276 [INFO][3349] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.31.4/26] block=192.168.31.0/26 handle="k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" host="10.0.0.145" Dec 13 
01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.277 [INFO][3349] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.31.4/26] handle="k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" host="10.0.0.145" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.277 [INFO][3349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.277 [INFO][3349] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.31.4/26] IPv6=[] ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" HandleID="k8s-pod-network.2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Workload="10.0.0.145-k8s-test--pod--1-eth0" Dec 13 01:49:27.294332 containerd[1444]: 2024-12-13 01:49:27.278 [INFO][3335] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"03dc8954-8c74-48c9-981d-8c6cf68e5ca7", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:27.295397 containerd[1444]: 2024-12-13 01:49:27.279 [INFO][3335] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.31.4/32] ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Dec 13 01:49:27.295397 containerd[1444]: 2024-12-13 01:49:27.279 [INFO][3335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Dec 13 01:49:27.295397 containerd[1444]: 2024-12-13 01:49:27.281 [INFO][3335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Dec 13 01:49:27.295397 containerd[1444]: 2024-12-13 01:49:27.282 [INFO][3335] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.145-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"03dc8954-8c74-48c9-981d-8c6cf68e5ca7", ResourceVersion:"1222", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.145", ContainerID:"2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.31.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"3a:90:fd:e4:79:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:49:27.295397 containerd[1444]: 2024-12-13 01:49:27.292 [INFO][3335] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.145-k8s-test--pod--1-eth0" Dec 13 01:49:27.329373 containerd[1444]: time="2024-12-13T01:49:27.329250799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:49:27.329373 containerd[1444]: time="2024-12-13T01:49:27.329331879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:49:27.329373 containerd[1444]: time="2024-12-13T01:49:27.329347719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:27.329542 containerd[1444]: time="2024-12-13T01:49:27.329432758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:49:27.345807 systemd[1]: Started cri-containerd-2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678.scope - libcontainer container 2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678. 
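
The ipam/ipam.go entries above show the node 10.0.0.145 confirming its affinity for block 192.168.31.0/26 under the host-wide IPAM lock and assigning 192.168.31.4/26 to test-pod-1 (the provisioner pod received 192.168.31.3/32 from the same block earlier in the log). A minimal Go sketch, using only the standard library rather than Calico's ipam package, that checks the assigned addresses fall inside the affine block and shows the block's capacity:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and addresses as reported by ipam/ipam.go in the log above.
	block := netip.MustParsePrefix("192.168.31.0/26")
	for _, s := range []string{"192.168.31.3", "192.168.31.4"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
	// A /26 affinity block holds 2^(32-26) = 64 addresses.
	fmt.Println("block size:", 1<<(32-block.Bits()))
}
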
Dec 13 01:49:27.356681 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:49:27.373398 containerd[1444]: time="2024-12-13T01:49:27.373292827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:03dc8954-8c74-48c9-981d-8c6cf68e5ca7,Namespace:default,Attempt:0,} returns sandbox id \"2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678\"" Dec 13 01:49:27.374778 containerd[1444]: time="2024-12-13T01:49:27.374739464Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:49:27.643688 containerd[1444]: time="2024-12-13T01:49:27.643310827Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:49:27.643818 containerd[1444]: time="2024-12-13T01:49:27.643738506Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 01:49:27.647537 containerd[1444]: time="2024-12-13T01:49:27.647498859Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 272.723155ms" Dec 13 01:49:27.647588 containerd[1444]: time="2024-12-13T01:49:27.647538499Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:49:27.649422 containerd[1444]: time="2024-12-13T01:49:27.649370375Z" level=info msg="CreateContainer within sandbox \"2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:49:27.659774 containerd[1444]: time="2024-12-13T01:49:27.659730393Z" level=info msg="CreateContainer within sandbox \"2dc59ed477633c289f20324b654d6b3ef641158b12b54181bb5b910cad6d8678\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"310949c0d5e3f114ed84f97e9ae3887f94e1322129a9f096e2b2016418ce9176\"" Dec 13 01:49:27.660161 containerd[1444]: time="2024-12-13T01:49:27.660128312Z" level=info msg="StartContainer for \"310949c0d5e3f114ed84f97e9ae3887f94e1322129a9f096e2b2016418ce9176\"" Dec 13 01:49:27.686812 systemd[1]: Started cri-containerd-310949c0d5e3f114ed84f97e9ae3887f94e1322129a9f096e2b2016418ce9176.scope - libcontainer container 310949c0d5e3f114ed84f97e9ae3887f94e1322129a9f096e2b2016418ce9176. Dec 13 01:49:27.708379 containerd[1444]: time="2024-12-13T01:49:27.708337013Z" level=info msg="StartContainer for \"310949c0d5e3f114ed84f97e9ae3887f94e1322129a9f096e2b2016418ce9176\" returns successfully" Dec 13 01:49:27.971250 kubelet[1744]: E1213 01:49:27.971132 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:28.971957 kubelet[1744]: E1213 01:49:28.971905 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:49:29.324864 systemd-networkd[1392]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 01:49:29.972692 kubelet[1744]: E1213 01:49:29.972627 1744 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
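
The final pull above records the same image both by tag (ghcr.io/flatcar/nginx:latest) and by digest (ghcr.io/flatcar/nginx@sha256:e04e…), and containerd reports the digest-pinned form as the repo digest. The Go sketch below is a simplified, illustrative splitter for references of the shapes seen in this log; it is not the reference parser containerd actually uses, and real parsers handle more cases (registry ports, combined tag-plus-digest forms, validation).

package main

import (
	"fmt"
	"strings"
)

// splitRef handles the two shapes seen in the log: name[:tag] and name@sha256:<hex>.
func splitRef(ref string) (name, tag, digest string) {
	if n, d, ok := strings.Cut(ref, "@"); ok {
		return n, "", d
	}
	// Treat the suffix as a tag only if the colon comes after the last "/",
	// so a registry port such as "host:5000/img" is not mistaken for a tag.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		return ref[:i], ref[i+1:], ""
	}
	return ref, "", ""
}

func main() {
	for _, r := range []string{
		"ghcr.io/flatcar/nginx:latest",
		"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1",
		"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8",
	} {
		n, t, d := splitRef(r)
		fmt.Printf("name=%s tag=%s digest=%s\n", n, t, d)
	}
}
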