Dec 13 14:10:10.898245 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:10:10.898272 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024
Dec 13 14:10:10.898283 kernel: KASLR enabled
Dec 13 14:10:10.898289 kernel: efi: EFI v2.7 by EDK II
Dec 13 14:10:10.898294 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133c6b018 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132357218
Dec 13 14:10:10.898300 kernel: random: crng init done
Dec 13 14:10:10.898308 kernel: secureboot: Secure boot disabled
Dec 13 14:10:10.898315 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:10:10.898322 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Dec 13 14:10:10.898328 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 13 14:10:10.898335 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898341 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898347 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898353 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898360 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898368 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898374 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898380 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898387 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:10:10.898393 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 14:10:10.898399 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 13 14:10:10.898405 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:10:10.898412 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 14:10:10.898419 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Dec 13 14:10:10.898425 kernel: Zone ranges:
Dec 13 14:10:10.898431 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 14:10:10.898439 kernel: DMA32 empty
Dec 13 14:10:10.898446 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 13 14:10:10.898452 kernel: Movable zone start for each node
Dec 13 14:10:10.898459 kernel: Early memory node ranges
Dec 13 14:10:10.898465 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Dec 13 14:10:10.898471 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Dec 13 14:10:10.898478 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Dec 13 14:10:10.898484 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Dec 13 14:10:10.898490 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Dec 13 14:10:10.898496 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 14:10:10.898502 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 13 14:10:10.898510 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:10:10.898517 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:10:10.898523 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:10:10.898532 kernel: psci: Trusted OS migration not required
Dec 13 14:10:10.898538 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:10:10.898545 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 14:10:10.898553 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 14:10:10.898560 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 14:10:10.898567 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 14:10:10.898573 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:10:10.898580 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:10:10.898587 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:10:10.898593 kernel: CPU features: detected: Spectre-v4
Dec 13 14:10:10.898600 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:10:10.898606 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:10:10.898613 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:10:10.898620 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:10:10.898628 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:10:10.898634 kernel: alternatives: applying boot alternatives
Dec 13 14:10:10.898643 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472
Dec 13 14:10:10.898651 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:10:10.898658 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:10:10.898665 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:10:10.898672 kernel: Fallback order for Node 0: 0
Dec 13 14:10:10.898680 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Dec 13 14:10:10.898686 kernel: Policy zone: Normal
Dec 13 14:10:10.898693 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:10:10.898732 kernel: software IO TLB: area num 2.
Dec 13 14:10:10.898742 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Dec 13 14:10:10.898750 kernel: Memory: 3881016K/4096000K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 214984K reserved, 0K cma-reserved)
Dec 13 14:10:10.898757 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:10:10.898763 kernel: trace event string verifier disabled
Dec 13 14:10:10.898770 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:10:10.898777 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:10:10.898785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:10:10.898793 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:10:10.898799 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:10:10.898806 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:10:10.898813 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:10:10.898821 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:10:10.898828 kernel: GICv3: 256 SPIs implemented
Dec 13 14:10:10.898834 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:10:10.898841 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:10:10.898848 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 14:10:10.898855 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 14:10:10.898862 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 14:10:10.898869 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:10:10.898876 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:10:10.898882 kernel: GICv3: using LPI property table @0x00000001000e0000
Dec 13 14:10:10.898889 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Dec 13 14:10:10.898896 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 14:10:10.898904 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:10:10.898910 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:10:10.898917 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:10:10.898924 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:10:10.898930 kernel: Console: colour dummy device 80x25
Dec 13 14:10:10.898937 kernel: ACPI: Core revision 20230628
Dec 13 14:10:10.898945 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:10:10.898951 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:10:10.898958 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 14:10:10.898965 kernel: landlock: Up and running.
Dec 13 14:10:10.898973 kernel: SELinux: Initializing.
Dec 13 14:10:10.898980 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:10:10.898987 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:10:10.898994 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 14:10:10.899002 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 14:10:10.899008 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:10:10.899016 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 14:10:10.899023 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 14:10:10.899031 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 14:10:10.899041 kernel: Remapping and enabling EFI services.
Dec 13 14:10:10.899048 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:10:10.899056 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:10:10.899064 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 14:10:10.899072 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Dec 13 14:10:10.899079 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:10:10.899086 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:10:10.899092 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:10:10.899099 kernel: SMP: Total of 2 processors activated.
Dec 13 14:10:10.899108 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:10:10.899114 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:10:10.899126 kernel: CPU features: detected: Common not Private translations
Dec 13 14:10:10.899134 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:10:10.899141 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 14:10:10.899148 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:10:10.899155 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:10:10.899163 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:10:10.899183 kernel: CPU features: detected: RAS Extension Support
Dec 13 14:10:10.899192 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 14:10:10.899200 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:10:10.899207 kernel: alternatives: applying system-wide alternatives
Dec 13 14:10:10.899214 kernel: devtmpfs: initialized
Dec 13 14:10:10.899222 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:10:10.899229 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:10:10.899236 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:10:10.899243 kernel: SMBIOS 3.0.0 present.
Dec 13 14:10:10.899252 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 13 14:10:10.899259 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:10:10.899267 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:10:10.899274 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:10:10.899282 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:10:10.899289 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:10:10.899296 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Dec 13 14:10:10.899304 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:10:10.899312 kernel: cpuidle: using governor menu
Dec 13 14:10:10.899320 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:10:10.899327 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:10:10.899335 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:10:10.899342 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:10:10.899350 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 14:10:10.899357 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 14:10:10.899364 kernel: Modules: 508880 pages in range for PLT usage
Dec 13 14:10:10.899371 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:10:10.899380 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 14:10:10.899388 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:10:10.899395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 14:10:10.899402 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:10:10.899409 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 14:10:10.899417 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:10:10.899424 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 14:10:10.899431 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:10:10.899438 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:10:10.899447 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:10:10.899454 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:10:10.899461 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:10:10.899468 kernel: ACPI: Interpreter enabled
Dec 13 14:10:10.899476 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:10:10.899483 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:10:10.899490 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:10:10.899497 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:10:10.899504 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:10:10.899673 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:10:10.899782 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:10:10.899857 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:10:10.899926 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 14:10:10.899996 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 14:10:10.900005 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 14:10:10.900013 kernel: PCI host bridge to bus 0000:00
Dec 13 14:10:10.900092 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 14:10:10.900162 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:10:10.900288 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 14:10:10.900355 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:10:10.900443 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 14:10:10.900524 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Dec 13 14:10:10.900595 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Dec 13 14:10:10.900671 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 14:10:10.900769 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.900845 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Dec 13 14:10:10.900925 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.900996 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Dec 13 14:10:10.901075 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.901151 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Dec 13 14:10:10.901257 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.901333 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Dec 13 14:10:10.901413 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.901484 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Dec 13 14:10:10.901563 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.901642 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Dec 13 14:10:10.901734 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.901808 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Dec 13 14:10:10.901885 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.901957 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Dec 13 14:10:10.902037 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 14:10:10.902114 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Dec 13 14:10:10.902209 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Dec 13 14:10:10.902282 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Dec 13 14:10:10.902363 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 14:10:10.902438 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Dec 13 14:10:10.902513 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:10:10.902588 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 14:10:10.902677 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 14:10:10.902785 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Dec 13 14:10:10.902877 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 14:10:10.902954 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Dec 13 14:10:10.903029 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 13 14:10:10.903115 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 14:10:10.903249 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 13 14:10:10.903338 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 14:10:10.903413 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 13 14:10:10.903495 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 14:10:10.903569 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Dec 13 14:10:10.903643 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 14:10:10.903841 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 14:10:10.903928 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Dec 13 14:10:10.904004 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 13 14:10:10.904078 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 14:10:10.904155 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 14:10:10.904257 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 13 14:10:10.904333 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 13 14:10:10.904415 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 14:10:10.904490 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 14:10:10.904561 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 13 14:10:10.904638 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 14:10:10.904848 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 13 14:10:10.904927 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 14:10:10.905000 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 14:10:10.905076 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 13 14:10:10.905147 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 14:10:10.905266 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 14:10:10.905341 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 13 14:10:10.905419 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Dec 13 14:10:10.905492 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 14:10:10.905563 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 13 14:10:10.905637 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 13 14:10:10.905735 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 14:10:10.905831 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 13 14:10:10.905904 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 13 14:10:10.905978 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 14:10:10.906047 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 13 14:10:10.906120 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 13 14:10:10.906206 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 14:10:10.906285 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 13 14:10:10.906356 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 13 14:10:10.906432 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 14:10:10.906501 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 14:10:10.906573 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 14:10:10.906645 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 14:10:10.906757 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 14:10:10.906840 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 14:10:10.906914 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Dec 13 14:10:10.906982 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 14:10:10.907052 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Dec 13 14:10:10.907123 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 14:10:10.907209 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Dec 13 14:10:10.907282 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 14:10:10.907376 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Dec 13 14:10:10.907457 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 14:10:10.907536 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Dec 13 14:10:10.907609 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 14:10:10.907729 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Dec 13 14:10:10.907814 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 14:10:10.907897 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Dec 13 14:10:10.907973 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Dec 13 14:10:10.908044 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Dec 13 14:10:10.908115 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 14:10:10.908237 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Dec 13 14:10:10.908312 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 14:10:10.908385 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Dec 13 14:10:10.908456 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 14:10:10.908528 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Dec 13 14:10:10.908606 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 14:10:10.908677 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Dec 13 14:10:10.908783 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 14:10:10.908862 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Dec 13 14:10:10.908934 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 14:10:10.909018 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Dec 13 14:10:10.909119 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 14:10:10.909206 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Dec 13 14:10:10.909285 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 14:10:10.909360 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Dec 13 14:10:10.909429 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Dec 13 14:10:10.909505 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Dec 13 14:10:10.909585 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Dec 13 14:10:10.909661 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:10:10.909787 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Dec 13 14:10:10.909874 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 14:10:10.909969 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 14:10:10.910044 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 14:10:10.910112 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 14:10:10.910205 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Dec 13 14:10:10.910281 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 14:10:10.910356 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 14:10:10.910427 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 13 14:10:10.910499 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 14:10:10.910577 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 14:10:10.910652 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Dec 13 14:10:10.910750 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 14:10:10.910824 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 14:10:10.910900 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 13 14:10:10.910971 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 14:10:10.911049 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 14:10:10.911121 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 14:10:10.911201 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 14:10:10.911274 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 13 14:10:10.911345 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 14:10:10.911422 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Dec 13 14:10:10.911497 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 14:10:10.911567 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 14:10:10.911637 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 13 14:10:10.911802 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 14:10:10.911894 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Dec 13 14:10:10.911970 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Dec 13 14:10:10.912041 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 14:10:10.912108 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 14:10:10.912228 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 13 14:10:10.912306 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 14:10:10.912382 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Dec 13 14:10:10.912456 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Dec 13 14:10:10.912528 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Dec 13 14:10:10.912602 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 14:10:10.912680 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 14:10:10.912787 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 13 14:10:10.912860 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 14:10:10.912932 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 14:10:10.913001 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 14:10:10.913069 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 13 14:10:10.913138 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 14:10:10.913226 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 14:10:10.913297 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 13 14:10:10.913370 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Dec 13 14:10:10.913437 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 14:10:10.913510 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 14:10:10.913572 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:10:10.913635 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 14:10:10.914824 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 14:10:10.914916 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 14:10:10.914984 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 14:10:10.915056 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Dec 13 14:10:10.915119 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 14:10:10.915204 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 14:10:10.915283 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Dec 13 14:10:10.915346 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 14:10:10.915410 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 14:10:10.915493 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 14:10:10.915558 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Dec 13 14:10:10.915636 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 14:10:10.916872 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Dec 13 14:10:10.916974 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Dec 13 14:10:10.917038 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 14:10:10.917116 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Dec 13 14:10:10.917359 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Dec 13 14:10:10.917435 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 14:10:10.917510 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Dec 13 14:10:10.917580 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Dec 13 14:10:10.917647 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 14:10:10.918828 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Dec 13 14:10:10.918917 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Dec 13 14:10:10.918982 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 14:10:10.919056 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Dec 13 14:10:10.919122 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Dec 13 14:10:10.919250 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 14:10:10.919263 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:10:10.919271 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:10:10.919279 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:10:10.919287 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:10:10.919294 kernel: iommu: Default domain type: Translated
Dec 13 14:10:10.919302 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:10:10.919310 kernel: efivars: Registered efivars operations
Dec 13 14:10:10.919317 kernel: vgaarb: loaded
Dec 13 14:10:10.919329 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:10:10.919336 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:10:10.919345 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:10:10.919352 kernel: pnp: PnP ACPI init
Dec 13 14:10:10.919434 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 14:10:10.919446 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:10:10.919454 kernel: NET: Registered PF_INET protocol family
Dec 13 14:10:10.919462 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:10:10.919471 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:10:10.919479 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:10:10.919487 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:10:10.919494 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 14:10:10.919501 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:10:10.919509 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:10:10.919516 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:10:10.919524 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:10:10.919604 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Dec 13 14:10:10.919618 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:10:10.919625 kernel: kvm [1]: HYP mode not available
Dec 13 14:10:10.919633 kernel: Initialise system trusted keyrings
Dec 13 14:10:10.919640 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:10:10.919648 kernel: Key type asymmetric registered
Dec 13 14:10:10.919655 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:10:10.919664 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 14:10:10.919672 kernel: io scheduler mq-deadline registered
Dec 13 14:10:10.919679 kernel: io scheduler kyber registered
Dec 13 14:10:10.919688 kernel: io scheduler bfq registered
Dec 13 14:10:10.920751 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 14:10:10.920901 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Dec 13 14:10:10.920980 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Dec 13 14:10:10.921055 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 14:10:10.921132 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Dec 13 14:10:10.921228 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Dec 13 14:10:10.921328 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 14:10:10.921408 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Dec 13 14:10:10.921480 kernel: pcieport 0000:00:02.2:
AER: enabled with IRQ 52 Dec 13 14:10:10.921551 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:10:10.921625 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Dec 13 14:10:10.922948 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Dec 13 14:10:10.923087 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:10:10.923182 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Dec 13 14:10:10.923267 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Dec 13 14:10:10.923343 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:10:10.923421 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Dec 13 14:10:10.923493 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Dec 13 14:10:10.923573 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:10:10.923649 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Dec 13 14:10:10.925302 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Dec 13 14:10:10.925394 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:10:10.925472 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Dec 13 14:10:10.925546 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Dec 13 14:10:10.925631 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:10:10.925642 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 13 14:10:10.925738 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 
Dec 13 14:10:10.925819 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Dec 13 14:10:10.925891 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 14:10:10.925903 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 14:10:10.925911 kernel: ACPI: button: Power Button [PWRB] Dec 13 14:10:10.925923 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 14:10:10.926004 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Dec 13 14:10:10.926085 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 13 14:10:10.926203 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Dec 13 14:10:10.926218 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:10:10.926226 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 14:10:10.926311 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Dec 13 14:10:10.926323 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Dec 13 14:10:10.926336 kernel: thunder_xcv, ver 1.0 Dec 13 14:10:10.926343 kernel: thunder_bgx, ver 1.0 Dec 13 14:10:10.926351 kernel: nicpf, ver 1.0 Dec 13 14:10:10.926359 kernel: nicvf, ver 1.0 Dec 13 14:10:10.926458 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 14:10:10.926528 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:10:10 UTC (1734099010) Dec 13 14:10:10.926539 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:10:10.926548 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 14:10:10.926558 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 14:10:10.926566 kernel: watchdog: Hard watchdog permanently disabled Dec 13 14:10:10.926574 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:10:10.926582 kernel: Segment Routing with IPv6 Dec 13 14:10:10.926590 kernel: In-situ 
OAM (IOAM) with IPv6 Dec 13 14:10:10.926597 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:10:10.926604 kernel: Key type dns_resolver registered Dec 13 14:10:10.926612 kernel: registered taskstats version 1 Dec 13 14:10:10.926620 kernel: Loading compiled-in X.509 certificates Dec 13 14:10:10.926630 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 14:10:10.926637 kernel: Key type .fscrypt registered Dec 13 14:10:10.926645 kernel: Key type fscrypt-provisioning registered Dec 13 14:10:10.926654 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:10:10.926662 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:10:10.926670 kernel: ima: No architecture policies found Dec 13 14:10:10.926678 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 14:10:10.926688 kernel: clk: Disabling unused clocks Dec 13 14:10:10.928276 kernel: Freeing unused kernel memory: 39936K Dec 13 14:10:10.928300 kernel: Run /init as init process Dec 13 14:10:10.928308 kernel: with arguments: Dec 13 14:10:10.928316 kernel: /init Dec 13 14:10:10.928323 kernel: with environment: Dec 13 14:10:10.928331 kernel: HOME=/ Dec 13 14:10:10.928338 kernel: TERM=linux Dec 13 14:10:10.928345 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:10:10.928355 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 14:10:10.928368 systemd[1]: Detected virtualization kvm. Dec 13 14:10:10.928376 systemd[1]: Detected architecture arm64. Dec 13 14:10:10.928383 systemd[1]: Running in initrd. Dec 13 14:10:10.928393 systemd[1]: No hostname configured, using default hostname. 
Dec 13 14:10:10.928401 systemd[1]: Hostname set to . Dec 13 14:10:10.928409 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:10:10.928417 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:10:10.928426 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 14:10:10.928436 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 14:10:10.928446 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 14:10:10.928456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 14:10:10.928464 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 14:10:10.928472 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 14:10:10.928481 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 14:10:10.928491 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 14:10:10.928499 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 14:10:10.928508 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 14:10:10.928516 systemd[1]: Reached target paths.target - Path Units. Dec 13 14:10:10.928525 systemd[1]: Reached target slices.target - Slice Units. Dec 13 14:10:10.928533 systemd[1]: Reached target swap.target - Swaps. Dec 13 14:10:10.928541 systemd[1]: Reached target timers.target - Timer Units. Dec 13 14:10:10.928550 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 14:10:10.928557 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Dec 13 14:10:10.928567 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 14:10:10.928575 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 14:10:10.928583 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 14:10:10.928591 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 14:10:10.928600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 14:10:10.928608 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 14:10:10.928616 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 14:10:10.928624 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 14:10:10.928633 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 14:10:10.928642 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:10:10.928650 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 14:10:10.928658 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 14:10:10.928667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:10:10.928675 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 14:10:10.928741 systemd-journald[237]: Collecting audit messages is disabled. Dec 13 14:10:10.928768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 14:10:10.928777 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:10:10.928786 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 14:10:10.928796 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 14:10:10.928805 systemd-journald[237]: Journal started Dec 13 14:10:10.928830 systemd-journald[237]: Runtime Journal (/run/log/journal/5b8b5bff283c4ee0af9cddd6c52ee8f9) is 8.0M, max 76.5M, 68.5M free. Dec 13 14:10:10.907877 systemd-modules-load[238]: Inserted module 'overlay' Dec 13 14:10:10.934009 kernel: Bridge firewalling registered Dec 13 14:10:10.934064 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 14:10:10.934154 systemd-modules-load[238]: Inserted module 'br_netfilter' Dec 13 14:10:10.935566 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:10:10.938100 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 14:10:10.938887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 14:10:10.946098 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 14:10:10.959054 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:10:10.964969 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 14:10:10.967334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 14:10:10.973741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 14:10:10.984560 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 14:10:10.994453 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 14:10:10.996003 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 14:10:10.997562 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 14:10:11.009020 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 14:10:11.011001 dracut-cmdline[269]: dracut-dracut-053 Dec 13 14:10:11.014783 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 14:10:11.033334 systemd-resolved[278]: Positive Trust Anchors: Dec 13 14:10:11.033351 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:10:11.033387 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 14:10:11.044429 systemd-resolved[278]: Defaulting to hostname 'linux'. Dec 13 14:10:11.045495 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 14:10:11.046291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 14:10:11.099766 kernel: SCSI subsystem initialized Dec 13 14:10:11.104742 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:10:11.113743 kernel: iscsi: registered transport (tcp) Dec 13 14:10:11.126745 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:10:11.126828 kernel: QLogic iSCSI HBA Driver Dec 13 14:10:11.179523 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Dec 13 14:10:11.188067 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 14:10:11.207028 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:10:11.207103 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:10:11.207120 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 14:10:11.259739 kernel: raid6: neonx8 gen() 15574 MB/s Dec 13 14:10:11.276777 kernel: raid6: neonx4 gen() 12263 MB/s Dec 13 14:10:11.293758 kernel: raid6: neonx2 gen() 13192 MB/s Dec 13 14:10:11.310789 kernel: raid6: neonx1 gen() 10464 MB/s Dec 13 14:10:11.327773 kernel: raid6: int64x8 gen() 6764 MB/s Dec 13 14:10:11.344763 kernel: raid6: int64x4 gen() 7319 MB/s Dec 13 14:10:11.361761 kernel: raid6: int64x2 gen() 6079 MB/s Dec 13 14:10:11.378850 kernel: raid6: int64x1 gen() 5030 MB/s Dec 13 14:10:11.378998 kernel: raid6: using algorithm neonx8 gen() 15574 MB/s Dec 13 14:10:11.395775 kernel: raid6: .... xor() 11868 MB/s, rmw enabled Dec 13 14:10:11.395861 kernel: raid6: using neon recovery algorithm Dec 13 14:10:11.400958 kernel: xor: measuring software checksum speed Dec 13 14:10:11.401017 kernel: 8regs : 21630 MB/sec Dec 13 14:10:11.401027 kernel: 32regs : 19092 MB/sec Dec 13 14:10:11.401036 kernel: arm64_neon : 27984 MB/sec Dec 13 14:10:11.401045 kernel: xor: using function: arm64_neon (27984 MB/sec) Dec 13 14:10:11.450773 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 14:10:11.464142 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 14:10:11.469929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 14:10:11.493248 systemd-udevd[456]: Using default interface naming scheme 'v255'. Dec 13 14:10:11.496835 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 14:10:11.507975 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 14:10:11.524222 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Dec 13 14:10:11.560549 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 14:10:11.566936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 14:10:11.616747 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 14:10:11.627864 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 14:10:11.641329 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 14:10:11.643631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 14:10:11.645056 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 14:10:11.646658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 14:10:11.652902 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 14:10:11.679797 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 14:10:11.755659 kernel: scsi host0: Virtio SCSI HBA Dec 13 14:10:11.756464 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 14:10:11.757246 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 14:10:11.770254 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:10:11.770833 kernel: ACPI: bus type USB registered Dec 13 14:10:11.770440 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 14:10:11.772679 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 14:10:11.775492 kernel: usbcore: registered new interface driver usbfs Dec 13 14:10:11.775514 kernel: usbcore: registered new interface driver hub Dec 13 14:10:11.775523 kernel: usbcore: registered new device driver usb Dec 13 14:10:11.774834 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:10:11.774983 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:10:11.776103 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:10:11.783135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:10:11.791121 kernel: sr 0:0:0:0: Power-on or device reset occurred Dec 13 14:10:11.795416 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Dec 13 14:10:11.795535 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:10:11.795545 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Dec 13 14:10:11.810989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:10:11.820889 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 14:10:11.832911 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 14:10:11.848572 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 14:10:11.848694 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 14:10:11.848837 kernel: sd 0:0:0:1: Power-on or device reset occurred Dec 13 14:10:11.848948 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 14:10:11.849045 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 14:10:11.849140 kernel: sd 0:0:0:1: [sda] Write Protect is off Dec 13 14:10:11.849254 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 14:10:11.849344 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Dec 13 14:10:11.849429 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 14:10:11.849588 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 13 14:10:11.849679 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:10:11.849689 kernel: GPT:17805311 != 80003071 Dec 13 14:10:11.849718 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:10:11.849736 kernel: GPT:17805311 != 80003071 Dec 13 14:10:11.849745 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:10:11.849754 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:10:11.849762 kernel: hub 1-0:1.0: USB hub found Dec 13 14:10:11.849870 kernel: hub 1-0:1.0: 4 ports detected Dec 13 14:10:11.849965 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Dec 13 14:10:11.850056 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 14:10:11.850200 kernel: hub 2-0:1.0: USB hub found Dec 13 14:10:11.850367 kernel: hub 2-0:1.0: 4 ports detected Dec 13 14:10:11.858037 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 14:10:11.898750 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (504) Dec 13 14:10:11.904741 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (527) Dec 13 14:10:11.911452 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 14:10:11.923214 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 13 14:10:11.930779 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 14:10:11.931420 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 14:10:11.937819 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 14:10:11.943958 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 14:10:11.953946 disk-uuid[573]: Primary Header is updated. Dec 13 14:10:11.953946 disk-uuid[573]: Secondary Entries is updated. Dec 13 14:10:11.953946 disk-uuid[573]: Secondary Header is updated. 
Dec 13 14:10:11.963736 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:10:11.968680 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:10:12.086758 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 14:10:12.330811 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 13 14:10:12.464218 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 13 14:10:12.464273 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 14:10:12.465712 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 13 14:10:12.518717 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 13 14:10:12.519336 kernel: usbcore: registered new interface driver usbhid Dec 13 14:10:12.519353 kernel: usbhid: USB HID core driver Dec 13 14:10:12.973784 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:10:12.975504 disk-uuid[574]: The operation has completed successfully. Dec 13 14:10:13.028305 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:10:13.028416 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 14:10:13.054003 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 14:10:13.061796 sh[589]: Success Dec 13 14:10:13.078833 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:10:13.141242 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 14:10:13.150838 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 14:10:13.155371 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 14:10:13.171179 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 14:10:13.171237 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:10:13.171248 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 14:10:13.171258 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 14:10:13.171267 kernel: BTRFS info (device dm-0): using free space tree Dec 13 14:10:13.178721 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 14:10:13.180971 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 14:10:13.182137 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 14:10:13.190916 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 14:10:13.193192 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 14:10:13.209424 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 14:10:13.209482 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:10:13.209493 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:10:13.212718 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:10:13.212789 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 14:10:13.224098 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:10:13.225357 kernel: BTRFS info (device sda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 14:10:13.229991 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 14:10:13.237095 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 14:10:13.338655 ignition[675]: Ignition 2.20.0 Dec 13 14:10:13.338670 ignition[675]: Stage: fetch-offline Dec 13 14:10:13.338724 ignition[675]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:10:13.338744 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 14:10:13.338903 ignition[675]: parsed url from cmdline: "" Dec 13 14:10:13.338907 ignition[675]: no config URL provided Dec 13 14:10:13.338912 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:10:13.341953 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 14:10:13.338920 ignition[675]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:10:13.342945 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 14:10:13.338925 ignition[675]: failed to fetch config: resource requires networking Dec 13 14:10:13.339211 ignition[675]: Ignition finished successfully Dec 13 14:10:13.351022 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 14:10:13.372812 systemd-networkd[777]: lo: Link UP Dec 13 14:10:13.372820 systemd-networkd[777]: lo: Gained carrier Dec 13 14:10:13.374561 systemd-networkd[777]: Enumeration completed Dec 13 14:10:13.375335 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 14:10:13.376447 systemd[1]: Reached target network.target - Network. Dec 13 14:10:13.376852 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 14:10:13.376857 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:10:13.377842 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 14:10:13.377846 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:10:13.378595 systemd-networkd[777]: eth0: Link UP
Dec 13 14:10:13.378598 systemd-networkd[777]: eth0: Gained carrier
Dec 13 14:10:13.378606 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:10:13.385098 systemd-networkd[777]: eth1: Link UP
Dec 13 14:10:13.385103 systemd-networkd[777]: eth1: Gained carrier
Dec 13 14:10:13.385117 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 14:10:13.388767 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 14:10:13.402881 ignition[779]: Ignition 2.20.0
Dec 13 14:10:13.403529 ignition[779]: Stage: fetch
Dec 13 14:10:13.403755 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:10:13.403767 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:10:13.403870 ignition[779]: parsed url from cmdline: ""
Dec 13 14:10:13.403873 ignition[779]: no config URL provided
Dec 13 14:10:13.403878 ignition[779]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:10:13.403887 ignition[779]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:10:13.403975 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 14:10:13.404896 ignition[779]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 14:10:13.415846 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:10:13.442837 systemd-networkd[777]: eth0: DHCPv4 address 49.13.133.85/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 14:10:13.605090 ignition[779]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 14:10:13.612053 ignition[779]: GET result: OK
Dec 13 14:10:13.612135 ignition[779]: parsing config with SHA512: ae7c5eb733379009e3acf1798b27e17db9d23626c28b02ace855c22d7f4fea5658771d6f28115617a807fa88f8385a21d85d20466dcac2f5d862f971fa077391
Dec 13 14:10:13.617178 unknown[779]: fetched base config from "system"
Dec 13 14:10:13.617195 unknown[779]: fetched base config from "system"
Dec 13 14:10:13.617754 ignition[779]: fetch: fetch complete
Dec 13 14:10:13.617204 unknown[779]: fetched user config from "hetzner"
Dec 13 14:10:13.617763 ignition[779]: fetch: fetch passed
Dec 13 14:10:13.617832 ignition[779]: Ignition finished successfully
Dec 13 14:10:13.619715 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 14:10:13.625971 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 14:10:13.640684 ignition[787]: Ignition 2.20.0
Dec 13 14:10:13.641315 ignition[787]: Stage: kargs
Dec 13 14:10:13.641510 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:10:13.641529 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:10:13.642326 ignition[787]: kargs: kargs passed
Dec 13 14:10:13.642380 ignition[787]: Ignition finished successfully
Dec 13 14:10:13.645213 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 14:10:13.650897 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 14:10:13.665602 ignition[793]: Ignition 2.20.0
Dec 13 14:10:13.665614 ignition[793]: Stage: disks
Dec 13 14:10:13.665812 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:10:13.665823 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:10:13.666581 ignition[793]: disks: disks passed
Dec 13 14:10:13.666626 ignition[793]: Ignition finished successfully
Dec 13 14:10:13.668620 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 14:10:13.670081 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 14:10:13.671439 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 14:10:13.672680 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 14:10:13.673751 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 14:10:13.674694 systemd[1]: Reached target basic.target - Basic System.
Dec 13 14:10:13.686041 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 14:10:13.704514 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 14:10:13.710114 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 14:10:13.715833 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 14:10:13.774738 kernel: EXT4-fs (sda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none.
Dec 13 14:10:13.775232 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 14:10:13.776422 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:10:13.783851 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:10:13.786895 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 14:10:13.790911 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 14:10:13.792790 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:10:13.794004 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:10:13.802246 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (809)
Dec 13 14:10:13.802298 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:10:13.802803 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:10:13.803707 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:10:13.809204 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:10:13.809270 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 14:10:13.811693 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 14:10:13.814263 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:10:13.822924 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 14:10:13.872086 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:10:13.872924 coreos-metadata[811]: Dec 13 14:10:13.872 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 14:10:13.876026 coreos-metadata[811]: Dec 13 14:10:13.875 INFO Fetch successful
Dec 13 14:10:13.877993 coreos-metadata[811]: Dec 13 14:10:13.877 INFO wrote hostname ci-4186-0-0-5-82942f650b to /sysroot/etc/hostname
Dec 13 14:10:13.879965 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:10:13.879265 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 14:10:13.886488 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:10:13.891046 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:10:13.994416 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 14:10:13.998901 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 14:10:14.002950 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 14:10:14.010767 kernel: BTRFS info (device sda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:10:14.032911 ignition[926]: INFO : Ignition 2.20.0
Dec 13 14:10:14.033618 ignition[926]: INFO : Stage: mount
Dec 13 14:10:14.034290 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:10:14.035271 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:10:14.036397 ignition[926]: INFO : mount: mount passed
Dec 13 14:10:14.038510 ignition[926]: INFO : Ignition finished successfully
Dec 13 14:10:14.038421 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 14:10:14.040281 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 14:10:14.044945 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 14:10:14.170252 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 14:10:14.174923 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 14:10:14.196739 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (938)
Dec 13 14:10:14.197965 kernel: BTRFS info (device sda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2
Dec 13 14:10:14.198001 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 14:10:14.198021 kernel: BTRFS info (device sda6): using free space tree
Dec 13 14:10:14.201740 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 14:10:14.201796 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 14:10:14.205518 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 14:10:14.231670 ignition[955]: INFO : Ignition 2.20.0
Dec 13 14:10:14.231670 ignition[955]: INFO : Stage: files
Dec 13 14:10:14.232853 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:10:14.232853 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:10:14.234469 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:10:14.234469 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:10:14.234469 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:10:14.238913 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:10:14.239726 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:10:14.239726 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:10:14.239642 unknown[955]: wrote ssh authorized keys file for user: core
Dec 13 14:10:14.243169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:10:14.243169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:10:14.243169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:10:14.243169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:10:14.243169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:10:14.243169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:10:14.243169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:10:14.243169 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Dec 13 14:10:14.595885 systemd-networkd[777]: eth0: Gained IPv6LL
Dec 13 14:10:14.660100 systemd-networkd[777]: eth1: Gained IPv6LL
Dec 13 14:10:14.877581 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 14:10:15.204467 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Dec 13 14:10:15.204467 ignition[955]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Dec 13 14:10:15.206982 ignition[955]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 14:10:15.206982 ignition[955]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 14:10:15.206982 ignition[955]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Dec 13 14:10:15.206982 ignition[955]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:10:15.206982 ignition[955]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:10:15.206982 ignition[955]: INFO : files: files passed
Dec 13 14:10:15.206982 ignition[955]: INFO : Ignition finished successfully
Dec 13 14:10:15.208579 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 14:10:15.216966 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 14:10:15.220922 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 14:10:15.223901 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:10:15.224040 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 14:10:15.236618 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:10:15.236618 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:10:15.239378 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:10:15.241520 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:10:15.242961 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 14:10:15.251980 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 14:10:15.298554 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:10:15.298736 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 14:10:15.301017 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 14:10:15.302669 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 14:10:15.303359 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 14:10:15.310015 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 14:10:15.323482 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:10:15.331999 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 14:10:15.349706 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 14:10:15.350421 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:10:15.353371 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 14:10:15.355562 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:10:15.355840 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 14:10:15.358023 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 14:10:15.360066 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 14:10:15.361792 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 14:10:15.363459 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 14:10:15.364453 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 14:10:15.365536 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 14:10:15.366536 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 14:10:15.367650 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 14:10:15.368760 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 14:10:15.369732 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 14:10:15.370578 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:10:15.370724 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 14:10:15.371937 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 14:10:15.372571 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:10:15.373609 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 14:10:15.376782 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:10:15.377897 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:10:15.378095 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 14:10:15.380104 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:10:15.380256 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 14:10:15.381451 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:10:15.381538 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 14:10:15.382477 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 14:10:15.382570 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 14:10:15.395198 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 14:10:15.398996 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 14:10:15.399642 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:10:15.399833 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 14:10:15.404817 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:10:15.404937 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 14:10:15.414480 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:10:15.415181 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 14:10:15.422488 ignition[1008]: INFO : Ignition 2.20.0
Dec 13 14:10:15.422488 ignition[1008]: INFO : Stage: umount
Dec 13 14:10:15.422488 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:10:15.422488 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 14:10:15.429618 ignition[1008]: INFO : umount: umount passed
Dec 13 14:10:15.429618 ignition[1008]: INFO : Ignition finished successfully
Dec 13 14:10:15.425995 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:10:15.428917 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:10:15.429022 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 14:10:15.431875 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:10:15.431986 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 14:10:15.437340 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:10:15.437408 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 14:10:15.438012 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:10:15.438048 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 14:10:15.441242 systemd[1]: Stopped target network.target - Network.
Dec 13 14:10:15.441689 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:10:15.441771 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 14:10:15.444668 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 14:10:15.445817 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:10:15.446471 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:10:15.447380 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 14:10:15.447992 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 14:10:15.449270 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:10:15.449331 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 14:10:15.450804 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:10:15.450841 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 14:10:15.451751 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:10:15.451808 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 14:10:15.452794 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 14:10:15.452844 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 14:10:15.455768 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 14:10:15.456921 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 14:10:15.460773 systemd-networkd[777]: eth1: DHCPv6 lease lost
Dec 13 14:10:15.464275 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:10:15.464531 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 14:10:15.465936 systemd-networkd[777]: eth0: DHCPv6 lease lost
Dec 13 14:10:15.469578 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 14:10:15.469667 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 14:10:15.474856 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:10:15.476385 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 14:10:15.479009 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:10:15.479251 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 14:10:15.482974 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:10:15.483042 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:10:15.484228 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:10:15.484287 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 14:10:15.498892 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 14:10:15.500270 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:10:15.500386 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 14:10:15.501807 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:10:15.501857 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 14:10:15.503048 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:10:15.503093 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 14:10:15.504474 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 14:10:15.517156 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:10:15.517292 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 14:10:15.526209 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:10:15.526591 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 14:10:15.529093 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:10:15.529169 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:10:15.530355 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:10:15.530394 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:10:15.531589 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:10:15.531647 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 14:10:15.533282 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:10:15.533327 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 14:10:15.534651 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:10:15.534707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 14:10:15.541963 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 14:10:15.543037 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:10:15.543183 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 14:10:15.546438 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 14:10:15.546497 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 14:10:15.548861 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:10:15.548923 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 14:10:15.549852 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:10:15.549897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 14:10:15.551739 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:10:15.553561 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 14:10:15.555503 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 14:10:15.567739 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 14:10:15.577098 systemd[1]: Switching root.
Dec 13 14:10:15.614892 systemd-journald[237]: Journal stopped
Dec 13 14:10:16.522769 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:10:16.522839 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:10:16.522851 kernel: SELinux: policy capability open_perms=1
Dec 13 14:10:16.522861 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:10:16.522870 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:10:16.522879 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:10:16.522889 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:10:16.522899 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:10:16.522910 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:10:16.522919 kernel: audit: type=1403 audit(1734099015.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:10:16.522930 systemd[1]: Successfully loaded SELinux policy in 34.658ms.
Dec 13 14:10:16.522950 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.996ms.
Dec 13 14:10:16.522961 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 14:10:16.522972 systemd[1]: Detected virtualization kvm.
Dec 13 14:10:16.522982 systemd[1]: Detected architecture arm64.
Dec 13 14:10:16.522992 systemd[1]: Detected first boot.
Dec 13 14:10:16.523004 systemd[1]: Hostname set to .
Dec 13 14:10:16.523014 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:10:16.523024 zram_generator::config[1050]: No configuration found.
Dec 13 14:10:16.523035 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:10:16.523046 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:10:16.523056 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 14:10:16.523067 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:10:16.523077 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 14:10:16.523090 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 14:10:16.523100 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 14:10:16.523121 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 14:10:16.523134 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 14:10:16.523145 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 14:10:16.523156 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 14:10:16.523166 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 14:10:16.523176 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 14:10:16.523186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 14:10:16.523200 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 14:10:16.523210 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 14:10:16.523221 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 14:10:16.523231 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 14:10:16.523241 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 14:10:16.523252 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 14:10:16.523262 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 14:10:16.523274 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 14:10:16.523284 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 14:10:16.523294 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 14:10:16.523305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 14:10:16.523319 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 14:10:16.523329 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 14:10:16.523340 systemd[1]: Reached target swap.target - Swaps.
Dec 13 14:10:16.523350 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 14:10:16.523361 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 14:10:16.523371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 14:10:16.523381 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 14:10:16.523395 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 14:10:16.523412 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 14:10:16.523423 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 14:10:16.523433 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 14:10:16.523443 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 14:10:16.523453 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 14:10:16.523464 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 14:10:16.523475 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 14:10:16.523485 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:10:16.523495 systemd[1]: Reached target machines.target - Containers.
Dec 13 14:10:16.523506 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 14:10:16.523516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 14:10:16.523527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 14:10:16.523537 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 14:10:16.523548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 14:10:16.523558 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 14:10:16.523569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 14:10:16.523579 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 14:10:16.523589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 14:10:16.523600 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:10:16.523611 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:10:16.523621 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 14:10:16.523632 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:10:16.523644 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:10:16.523654 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 14:10:16.523664 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 14:10:16.523675 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 14:10:16.523687 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 14:10:16.523752 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 14:10:16.523769 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:10:16.523779 systemd[1]: Stopped verity-setup.service. Dec 13 14:10:16.523790 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 14:10:16.523800 kernel: fuse: init (API version 7.39) Dec 13 14:10:16.523809 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 14:10:16.523819 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 14:10:16.523830 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 14:10:16.523840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 14:10:16.523851 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 14:10:16.523862 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 14:10:16.523872 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:10:16.523882 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 14:10:16.523896 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:10:16.523908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 14:10:16.523919 kernel: ACPI: bus type drm_connector registered Dec 13 14:10:16.523929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:10:16.523939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 14:10:16.523951 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 13 14:10:16.523962 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 14:10:16.523973 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 14:10:16.523983 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:10:16.523994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 14:10:16.524004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 14:10:16.524014 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 14:10:16.524024 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 14:10:16.524034 kernel: loop: module loaded Dec 13 14:10:16.524069 systemd-journald[1117]: Collecting audit messages is disabled. Dec 13 14:10:16.524094 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:10:16.524105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 14:10:16.524153 systemd-journald[1117]: Journal started Dec 13 14:10:16.524183 systemd-journald[1117]: Runtime Journal (/run/log/journal/5b8b5bff283c4ee0af9cddd6c52ee8f9) is 8.0M, max 76.5M, 68.5M free. Dec 13 14:10:16.245318 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:10:16.271510 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 14:10:16.272090 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:10:16.525785 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 14:10:16.537343 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 14:10:16.544929 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 14:10:16.551889 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Dec 13 14:10:16.554528 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:10:16.554581 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 14:10:16.556251 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 14:10:16.560922 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 14:10:16.562810 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 14:10:16.563466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 14:10:16.570965 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 14:10:16.575326 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 14:10:16.576820 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:10:16.585037 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 14:10:16.586057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 14:10:16.591244 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 14:10:16.604978 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 14:10:16.614348 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 14:10:16.619461 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 14:10:16.620266 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Dec 13 14:10:16.624536 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 14:10:16.634832 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 14:10:16.650514 systemd-journald[1117]: Time spent on flushing to /var/log/journal/5b8b5bff283c4ee0af9cddd6c52ee8f9 is 90.328ms for 1112 entries. Dec 13 14:10:16.650514 systemd-journald[1117]: System Journal (/var/log/journal/5b8b5bff283c4ee0af9cddd6c52ee8f9) is 8.0M, max 584.8M, 576.8M free. Dec 13 14:10:16.767241 kernel: loop0: detected capacity change from 0 to 113552 Dec 13 14:10:16.767306 systemd-journald[1117]: Received client request to flush runtime journal. Dec 13 14:10:16.767410 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:10:16.767435 kernel: loop1: detected capacity change from 0 to 8 Dec 13 14:10:16.767454 kernel: loop2: detected capacity change from 0 to 194512 Dec 13 14:10:16.650931 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 14:10:16.652682 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 14:10:16.655572 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 14:10:16.665960 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 14:10:16.700060 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:10:16.717345 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Dec 13 14:10:16.717363 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Dec 13 14:10:16.724482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 14:10:16.737151 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Dec 13 14:10:16.746009 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 14:10:16.747388 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:10:16.749279 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 14:10:16.769222 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 14:10:16.814716 kernel: loop3: detected capacity change from 0 to 116784 Dec 13 14:10:16.821421 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 14:10:16.834915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 14:10:16.860983 kernel: loop4: detected capacity change from 0 to 113552 Dec 13 14:10:16.876444 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Dec 13 14:10:16.877387 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Dec 13 14:10:16.877755 kernel: loop5: detected capacity change from 0 to 8 Dec 13 14:10:16.880306 kernel: loop6: detected capacity change from 0 to 194512 Dec 13 14:10:16.888816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 14:10:16.901805 kernel: loop7: detected capacity change from 0 to 116784 Dec 13 14:10:16.915421 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 14:10:16.920213 (sd-merge)[1193]: Merged extensions into '/usr'. Dec 13 14:10:16.928067 systemd[1]: Reloading requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 14:10:16.928088 systemd[1]: Reloading... Dec 13 14:10:17.084736 zram_generator::config[1223]: No configuration found. Dec 13 14:10:17.182311 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Dec 13 14:10:17.213620 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:10:17.259645 systemd[1]: Reloading finished in 330 ms. Dec 13 14:10:17.296520 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 14:10:17.299818 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 14:10:17.308035 systemd[1]: Starting ensure-sysext.service... Dec 13 14:10:17.312905 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 14:10:17.334118 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 14:10:17.336456 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Dec 13 14:10:17.336484 systemd[1]: Reloading... Dec 13 14:10:17.351469 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:10:17.351678 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 14:10:17.352394 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:10:17.352591 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Dec 13 14:10:17.352637 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Dec 13 14:10:17.357963 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 14:10:17.358157 systemd-tmpfiles[1258]: Skipping /boot Dec 13 14:10:17.367612 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 14:10:17.367796 systemd-tmpfiles[1258]: Skipping /boot Dec 13 14:10:17.411740 zram_generator::config[1284]: No configuration found. 
Dec 13 14:10:17.515650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:10:17.562259 systemd[1]: Reloading finished in 225 ms. Dec 13 14:10:17.584908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 14:10:17.596945 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 14:10:17.602910 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 14:10:17.606044 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 14:10:17.613963 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 14:10:17.617950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 14:10:17.630006 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 14:10:17.639208 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 14:10:17.643079 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 14:10:17.648329 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 14:10:17.657056 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 14:10:17.660195 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 14:10:17.661692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 14:10:17.664397 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 14:10:17.675141 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Dec 13 14:10:17.676452 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:10:17.678452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 14:10:17.684435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 14:10:17.685296 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Dec 13 14:10:17.690078 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 14:10:17.690958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 14:10:17.698751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 14:10:17.704943 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 14:10:17.707125 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 14:10:17.710391 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 14:10:17.714161 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 14:10:17.717118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:10:17.717270 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 14:10:17.725940 systemd[1]: Finished ensure-sysext.service. Dec 13 14:10:17.727529 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:10:17.729019 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 14:10:17.730035 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:10:17.731882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 14:10:17.741533 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:10:17.756635 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 14:10:17.758042 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:10:17.758535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 14:10:17.764041 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 14:10:17.764309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 14:10:17.773058 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 14:10:17.787559 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 14:10:17.794788 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 14:10:17.801759 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:10:17.817113 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:10:17.820137 augenrules[1382]: No rules Dec 13 14:10:17.817300 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 14:10:17.896781 systemd-networkd[1365]: lo: Link UP Dec 13 14:10:17.896789 systemd-networkd[1365]: lo: Gained carrier Dec 13 14:10:17.899007 systemd-networkd[1365]: Enumeration completed Dec 13 14:10:17.899260 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 14:10:17.915586 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 14:10:17.916788 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Dec 13 14:10:17.918708 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 14:10:17.918782 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 14:10:17.936065 systemd-resolved[1327]: Positive Trust Anchors: Dec 13 14:10:17.936086 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:10:17.936156 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 14:10:17.942488 systemd-resolved[1327]: Using system hostname 'ci-4186-0-0-5-82942f650b'. Dec 13 14:10:17.944435 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 14:10:17.945370 systemd[1]: Reached target network.target - Network. Dec 13 14:10:17.945871 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 14:10:17.972728 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1363) Dec 13 14:10:18.006142 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 14:10:18.006156 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 14:10:18.006893 systemd-networkd[1365]: eth0: Link UP Dec 13 14:10:18.006907 systemd-networkd[1365]: eth0: Gained carrier Dec 13 14:10:18.006924 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 14:10:18.008785 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1363) Dec 13 14:10:18.052150 systemd-networkd[1365]: eth0: DHCPv4 address 49.13.133.85/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 14:10:18.053924 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Dec 13 14:10:18.064969 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1398) Dec 13 14:10:18.065052 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:10:18.070467 systemd-networkd[1365]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 14:10:18.070482 systemd-networkd[1365]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:10:18.072006 systemd-networkd[1365]: eth1: Link UP Dec 13 14:10:18.072020 systemd-networkd[1365]: eth1: Gained carrier Dec 13 14:10:18.072040 systemd-networkd[1365]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 14:10:18.103784 systemd-networkd[1365]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:10:18.104752 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 14:10:18.105124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Dec 13 14:10:18.111517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 14:10:18.115307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 14:10:18.118077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 14:10:18.119287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 14:10:18.120818 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:10:18.125654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:10:18.125853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 14:10:18.137559 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:10:18.139744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 14:10:18.142316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:10:18.143222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 14:10:18.146290 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:10:18.146376 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 14:10:18.180743 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Dec 13 14:10:18.180814 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 14:10:18.180827 kernel: [drm] features: -context_init Dec 13 14:10:18.178009 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Dec 13 14:10:18.181762 kernel: [drm] number of scanouts: 1 Dec 13 14:10:18.181867 kernel: [drm] number of cap sets: 0 Dec 13 14:10:18.185073 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 14:10:18.186725 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 14:10:18.190257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:10:18.197678 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 14:10:18.203106 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 14:10:18.631508 systemd-timesyncd[1357]: Contacted time server 85.220.190.246:123 (0.flatcar.pool.ntp.org). Dec 13 14:10:18.631569 systemd-timesyncd[1357]: Initial clock synchronization to Fri 2024-12-13 14:10:18.631382 UTC. Dec 13 14:10:18.631846 systemd-resolved[1327]: Clock change detected. Flushing caches. Dec 13 14:10:18.638365 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:10:18.640009 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:10:18.641081 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 14:10:18.650383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 14:10:18.737124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 14:10:18.760555 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 14:10:18.770302 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 14:10:18.784352 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:10:18.807787 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Dec 13 14:10:18.810784 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 14:10:18.811666 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 14:10:18.812508 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 14:10:18.813587 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 14:10:18.814765 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 14:10:18.815500 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 14:10:18.816175 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 14:10:18.816848 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:10:18.816890 systemd[1]: Reached target paths.target - Path Units. Dec 13 14:10:18.817501 systemd[1]: Reached target timers.target - Timer Units. Dec 13 14:10:18.819390 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 14:10:18.821509 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 14:10:18.827238 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 14:10:18.830544 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 14:10:18.832399 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 14:10:18.833745 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 14:10:18.834381 systemd[1]: Reached target basic.target - Basic System. Dec 13 14:10:18.834977 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 14:10:18.835014 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Dec 13 14:10:18.838174 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 14:10:18.841770 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:10:18.849146 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 14:10:18.853218 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 14:10:18.857149 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 14:10:18.868169 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 14:10:18.869191 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 14:10:18.871571 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 14:10:18.876207 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 14:10:18.881175 jq[1451]: false Dec 13 14:10:18.881303 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 14:10:18.885019 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 14:10:18.893382 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 14:10:18.895651 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:10:18.897282 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:10:18.909909 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 14:10:18.913021 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 14:10:18.917134 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Dec 13 14:10:18.920746 extend-filesystems[1454]: Found loop4 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found loop5 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found loop6 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found loop7 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found sda Dec 13 14:10:18.923893 extend-filesystems[1454]: Found sda1 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found sda2 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found sda3 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found usr Dec 13 14:10:18.923893 extend-filesystems[1454]: Found sda4 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found sda6 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found sda7 Dec 13 14:10:18.923893 extend-filesystems[1454]: Found sda9 Dec 13 14:10:18.922140 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:10:18.961255 extend-filesystems[1454]: Checking size of /dev/sda9 Dec 13 14:10:18.922446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 14:10:18.956433 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:10:18.956683 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 14:10:18.970638 jq[1462]: true Dec 13 14:10:18.976585 dbus-daemon[1450]: [system] SELinux support is enabled Dec 13 14:10:18.978889 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 14:10:18.980651 coreos-metadata[1449]: Dec 13 14:10:18.978 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 14:10:18.987371 extend-filesystems[1454]: Resized partition /dev/sda9 Dec 13 14:10:18.986283 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Dec 13 14:10:18.986316 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 14:10:18.988067 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:10:18.988089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 14:10:18.994828 extend-filesystems[1483]: resize2fs 1.47.1 (20-May-2024) Dec 13 14:10:18.997970 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 14:10:19.004896 coreos-metadata[1449]: Dec 13 14:10:19.004 INFO Fetch successful Dec 13 14:10:19.004896 coreos-metadata[1449]: Dec 13 14:10:19.004 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 14:10:19.010987 coreos-metadata[1449]: Dec 13 14:10:19.009 INFO Fetch successful Dec 13 14:10:19.019682 (ntainerd)[1482]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 14:10:19.024215 jq[1477]: true Dec 13 14:10:19.025102 update_engine[1461]: I20241213 14:10:19.023608 1461 main.cc:92] Flatcar Update Engine starting Dec 13 14:10:19.038798 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:10:19.039013 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 14:10:19.044423 systemd[1]: Started update-engine.service - Update Engine. Dec 13 14:10:19.046992 update_engine[1461]: I20241213 14:10:19.045768 1461 update_check_scheduler.cc:74] Next update check in 8m33s Dec 13 14:10:19.055389 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 14:10:19.111940 systemd-logind[1460]: New seat seat0. 
Dec 13 14:10:19.114887 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 14:10:19.114915 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Dec 13 14:10:19.115204 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 14:10:19.148659 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 13 14:10:19.157058 extend-filesystems[1483]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 14:10:19.157058 extend-filesystems[1483]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 13 14:10:19.157058 extend-filesystems[1483]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 13 14:10:19.166188 extend-filesystems[1454]: Resized filesystem in /dev/sda9
Dec 13 14:10:19.166188 extend-filesystems[1454]: Found sr0
Dec 13 14:10:19.167069 bash[1513]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:10:19.174658 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:10:19.174839 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 14:10:19.176717 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 14:10:19.182691 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 14:10:19.187721 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 14:10:19.194292 systemd[1]: Starting sshkeys.service...
Dec 13 14:10:19.210963 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1379)
Dec 13 14:10:19.239936 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 14:10:19.243603 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 14:10:19.321198 coreos-metadata[1524]: Dec 13 14:10:19.321 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 13 14:10:19.323521 coreos-metadata[1524]: Dec 13 14:10:19.323 INFO Fetch successful
Dec 13 14:10:19.326526 unknown[1524]: wrote ssh authorized keys file for user: core
Dec 13 14:10:19.345318 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:10:19.360301 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:10:19.363480 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 14:10:19.371651 systemd[1]: Finished sshkeys.service.
Dec 13 14:10:19.408743 containerd[1482]: time="2024-12-13T14:10:19.408559589Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 14:10:19.456875 containerd[1482]: time="2024-12-13T14:10:19.456600349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:10:19.465034 containerd[1482]: time="2024-12-13T14:10:19.464230669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:10:19.465034 containerd[1482]: time="2024-12-13T14:10:19.464444189Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:10:19.465034 containerd[1482]: time="2024-12-13T14:10:19.464563669Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:10:19.465545 containerd[1482]: time="2024-12-13T14:10:19.465498469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 14:10:19.465721 containerd[1482]: time="2024-12-13T14:10:19.465687949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 14:10:19.466046 containerd[1482]: time="2024-12-13T14:10:19.466005269Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:10:19.466173 containerd[1482]: time="2024-12-13T14:10:19.466140789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:10:19.466685 containerd[1482]: time="2024-12-13T14:10:19.466638389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:10:19.468001 containerd[1482]: time="2024-12-13T14:10:19.466796029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:10:19.468001 containerd[1482]: time="2024-12-13T14:10:19.466840669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:10:19.468001 containerd[1482]: time="2024-12-13T14:10:19.466866189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:10:19.468001 containerd[1482]: time="2024-12-13T14:10:19.467137429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:10:19.468001 containerd[1482]: time="2024-12-13T14:10:19.467621349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:10:19.468001 containerd[1482]: time="2024-12-13T14:10:19.467914669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:10:19.468538 containerd[1482]: time="2024-12-13T14:10:19.468516509Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:10:19.468747 containerd[1482]: time="2024-12-13T14:10:19.468672229Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:10:19.468873 containerd[1482]: time="2024-12-13T14:10:19.468857229Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:10:19.474056 containerd[1482]: time="2024-12-13T14:10:19.474001269Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:10:19.474271 containerd[1482]: time="2024-12-13T14:10:19.474245429Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:10:19.474494 containerd[1482]: time="2024-12-13T14:10:19.474467949Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 14:10:19.474614 containerd[1482]: time="2024-12-13T14:10:19.474589509Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 14:10:19.474714 containerd[1482]: time="2024-12-13T14:10:19.474692589Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:10:19.475065 containerd[1482]: time="2024-12-13T14:10:19.475030949Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:10:19.475710 containerd[1482]: time="2024-12-13T14:10:19.475681349Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476035109Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476072629Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476099629Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476121909Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476142869Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476164549Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476186949Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476221389Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476243709Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476266749Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476285909Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476322189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476421749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.476994 containerd[1482]: time="2024-12-13T14:10:19.476449309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476469589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476490509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476512869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476532069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476552469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476581469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476606349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476627349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476647429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476667629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476691509Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476733629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476756669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.477520 containerd[1482]: time="2024-12-13T14:10:19.476776909Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:10:19.478168 containerd[1482]: time="2024-12-13T14:10:19.478140269Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:10:19.478980 containerd[1482]: time="2024-12-13T14:10:19.478376949Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 14:10:19.478980 containerd[1482]: time="2024-12-13T14:10:19.478410109Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:10:19.478980 containerd[1482]: time="2024-12-13T14:10:19.478433069Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 14:10:19.478980 containerd[1482]: time="2024-12-13T14:10:19.478449269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.478980 containerd[1482]: time="2024-12-13T14:10:19.478480149Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 14:10:19.478980 containerd[1482]: time="2024-12-13T14:10:19.478498349Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 14:10:19.478980 containerd[1482]: time="2024-12-13T14:10:19.478515229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:10:19.479586 containerd[1482]: time="2024-12-13T14:10:19.479521629Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:10:19.479764 containerd[1482]: time="2024-12-13T14:10:19.479746869Z" level=info msg="Connect containerd service"
Dec 13 14:10:19.479866 containerd[1482]: time="2024-12-13T14:10:19.479852229Z" level=info msg="using legacy CRI server"
Dec 13 14:10:19.479921 containerd[1482]: time="2024-12-13T14:10:19.479907109Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 14:10:19.480314 containerd[1482]: time="2024-12-13T14:10:19.480292669Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:10:19.481436 containerd[1482]: time="2024-12-13T14:10:19.481317309Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:10:19.482349 containerd[1482]: time="2024-12-13T14:10:19.481691269Z" level=info msg="Start subscribing containerd event"
Dec 13 14:10:19.482349 containerd[1482]: time="2024-12-13T14:10:19.481771709Z" level=info msg="Start recovering state"
Dec 13 14:10:19.482349 containerd[1482]: time="2024-12-13T14:10:19.481853789Z" level=info msg="Start event monitor"
Dec 13 14:10:19.482349 containerd[1482]: time="2024-12-13T14:10:19.481869869Z" level=info msg="Start snapshots syncer"
Dec 13 14:10:19.482349 containerd[1482]: time="2024-12-13T14:10:19.481883269Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:10:19.482349 containerd[1482]: time="2024-12-13T14:10:19.481892029Z" level=info msg="Start streaming server"
Dec 13 14:10:19.483111 containerd[1482]: time="2024-12-13T14:10:19.483090149Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:10:19.483306 containerd[1482]: time="2024-12-13T14:10:19.483234949Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:10:19.484122 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 14:10:19.489126 containerd[1482]: time="2024-12-13T14:10:19.489083709Z" level=info msg="containerd successfully booted in 0.081613s"
Dec 13 14:10:20.144490 systemd-networkd[1365]: eth0: Gained IPv6LL
Dec 13 14:10:20.154714 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 14:10:20.157712 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 14:10:20.169909 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:20.172684 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 14:10:20.223062 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 14:10:20.272592 systemd-networkd[1365]: eth1: Gained IPv6LL
Dec 13 14:10:20.896146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:20.907859 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:10:21.548092 kubelet[1554]: E1213 14:10:21.547993 1554 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:10:21.550279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:10:21.550430 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:10:22.758901 sshd_keygen[1479]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:10:22.782789 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 14:10:22.791465 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 14:10:22.802540 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:10:22.803997 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 14:10:22.810400 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 14:10:22.825472 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 14:10:22.832567 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 14:10:22.846339 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 14:10:22.849025 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 14:10:22.849922 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 14:10:22.850773 systemd[1]: Startup finished in 751ms (kernel) + 5.077s (initrd) + 6.701s (userspace) = 12.530s.
Dec 13 14:10:22.872878 agetty[1577]: failed to open credentials directory
Dec 13 14:10:22.872985 agetty[1578]: failed to open credentials directory
Dec 13 14:10:31.801343 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:10:31.812316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:31.921568 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:31.926865 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:10:31.989127 kubelet[1591]: E1213 14:10:31.989029 1591 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:10:31.994143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:10:31.994501 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:10:42.003430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:10:42.019370 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:42.124930 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:42.136665 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:10:42.191966 kubelet[1607]: E1213 14:10:42.191887 1607 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:10:42.195910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:10:42.196185 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:10:52.252788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 14:10:52.262303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:10:52.382201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:10:52.401571 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:10:52.462872 kubelet[1623]: E1213 14:10:52.462800 1623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:10:52.465830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:10:52.466044 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:11:02.503076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 14:11:02.519354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:11:02.654294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:11:02.654894 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:11:02.711615 kubelet[1639]: E1213 14:11:02.711496 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:11:02.714705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:11:02.714974 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:11:04.332173 update_engine[1461]: I20241213 14:11:04.331259 1461 update_attempter.cc:509] Updating boot flags...
Dec 13 14:11:04.394048 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1656)
Dec 13 14:11:04.457979 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1657)
Dec 13 14:11:04.527045 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1657)
Dec 13 14:11:06.986633 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 14:11:07.000900 systemd[1]: Started sshd@0-49.13.133.85:22-27.254.235.1:58930.service - OpenSSH per-connection server daemon (27.254.235.1:58930).
Dec 13 14:11:08.021719 sshd[1669]: Invalid user diquest from 27.254.235.1 port 58930
Dec 13 14:11:08.215858 sshd[1669]: Received disconnect from 27.254.235.1 port 58930:11: Bye Bye [preauth]
Dec 13 14:11:08.215858 sshd[1669]: Disconnected from invalid user diquest 27.254.235.1 port 58930 [preauth]
Dec 13 14:11:08.218811 systemd[1]: sshd@0-49.13.133.85:22-27.254.235.1:58930.service: Deactivated successfully.
Dec 13 14:11:12.752660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 14:11:12.761347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:11:12.868605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:11:12.880591 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:11:12.934724 kubelet[1681]: E1213 14:11:12.934578 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:11:12.938855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:11:12.939192 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:11:18.502510 systemd[1]: Started sshd@1-49.13.133.85:22-128.199.252.176:48920.service - OpenSSH per-connection server daemon (128.199.252.176:48920).
Dec 13 14:11:19.525683 sshd[1690]: Invalid user ftpuser1 from 128.199.252.176 port 48920
Dec 13 14:11:19.713133 sshd[1690]: Received disconnect from 128.199.252.176 port 48920:11: Bye Bye [preauth]
Dec 13 14:11:19.713133 sshd[1690]: Disconnected from invalid user ftpuser1 128.199.252.176 port 48920 [preauth]
Dec 13 14:11:19.716887 systemd[1]: sshd@1-49.13.133.85:22-128.199.252.176:48920.service: Deactivated successfully.
Dec 13 14:11:22.100430 systemd[1]: Started sshd@2-49.13.133.85:22-182.151.63.201:60060.service - OpenSSH per-connection server daemon (182.151.63.201:60060).
Dec 13 14:11:23.002483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 14:11:23.012282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:11:23.144276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:11:23.155619 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:11:23.214100 kubelet[1705]: E1213 14:11:23.214019 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:11:23.216686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:11:23.216826 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:11:24.637973 sshd[1695]: Invalid user anik from 182.151.63.201 port 60060
Dec 13 14:11:24.962170 sshd[1695]: Received disconnect from 182.151.63.201 port 60060:11: Bye Bye [preauth]
Dec 13 14:11:24.962170 sshd[1695]: Disconnected from invalid user anik 182.151.63.201 port 60060 [preauth]
Dec 13 14:11:24.965828 systemd[1]: sshd@2-49.13.133.85:22-182.151.63.201:60060.service: Deactivated successfully.
Dec 13 14:11:33.253217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 14:11:33.264282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:11:33.379485 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:11:33.396555 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:11:33.453972 kubelet[1724]: E1213 14:11:33.453865 1724 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:11:33.457295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:11:33.458005 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:11:39.887521 systemd[1]: Started sshd@3-49.13.133.85:22-178.128.109.125:46458.service - OpenSSH per-connection server daemon (178.128.109.125:46458).
Dec 13 14:11:40.898624 sshd[1733]: Invalid user developer from 178.128.109.125 port 46458
Dec 13 14:11:41.089123 sshd[1733]: Received disconnect from 178.128.109.125 port 46458:11: Bye Bye [preauth]
Dec 13 14:11:41.089123 sshd[1733]: Disconnected from invalid user developer 178.128.109.125 port 46458 [preauth]
Dec 13 14:11:41.093632 systemd[1]: sshd@3-49.13.133.85:22-178.128.109.125:46458.service: Deactivated successfully.
Dec 13 14:11:43.502800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Dec 13 14:11:43.510599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:11:43.624301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:11:43.635589 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:11:43.693366 kubelet[1745]: E1213 14:11:43.693293 1745 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:11:43.696757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:11:43.697102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:11:53.752740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Dec 13 14:11:53.760926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:11:53.884847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:11:53.896557 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:11:53.948310 kubelet[1761]: E1213 14:11:53.948193 1761 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:11:53.951219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:11:53.951359 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:11:58.860338 systemd[1]: Started sshd@4-49.13.133.85:22-14.103.122.180:46532.service - OpenSSH per-connection server daemon (14.103.122.180:46532).
Dec 13 14:12:04.002475 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Dec 13 14:12:04.009261 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:12:04.150227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:12:04.150707 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:12:04.210610 kubelet[1779]: E1213 14:12:04.210526 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:12:04.215457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:12:04.215892 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:12:12.135531 systemd[1]: Started sshd@5-49.13.133.85:22-147.75.109.163:47720.service - OpenSSH per-connection server daemon (147.75.109.163:47720).
Dec 13 14:12:13.132927 sshd[1788]: Accepted publickey for core from 147.75.109.163 port 47720 ssh2: RSA SHA256:Szwo06Oo4IMasO/ql4vXN2RM2hXm5LtgxNU2ypRvnZ8
Dec 13 14:12:13.136136 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:12:13.151023 systemd-logind[1460]: New session 1 of user core.
Dec 13 14:12:13.151668 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 14:12:13.162464 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 14:12:13.177802 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 14:12:13.187560 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 14:12:13.191686 (systemd)[1792]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:12:13.305590 systemd[1792]: Queued start job for default target default.target.
Dec 13 14:12:13.316699 systemd[1792]: Created slice app.slice - User Application Slice.
Dec 13 14:12:13.316749 systemd[1792]: Reached target paths.target - Paths.
Dec 13 14:12:13.316764 systemd[1792]: Reached target timers.target - Timers.
Dec 13 14:12:13.318778 systemd[1792]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 14:12:13.335502 systemd[1792]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 14:12:13.335627 systemd[1792]: Reached target sockets.target - Sockets.
Dec 13 14:12:13.335640 systemd[1792]: Reached target basic.target - Basic System.
Dec 13 14:12:13.335686 systemd[1792]: Reached target default.target - Main User Target.
Dec 13 14:12:13.335715 systemd[1792]: Startup finished in 135ms.
Dec 13 14:12:13.336266 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 14:12:13.346296 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 14:12:14.039516 systemd[1]: Started sshd@6-49.13.133.85:22-147.75.109.163:47728.service - OpenSSH per-connection server daemon (147.75.109.163:47728).
Dec 13 14:12:14.252415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Dec 13 14:12:14.266565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:12:14.411735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:12:14.428684 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:12:14.491087 kubelet[1813]: E1213 14:12:14.491024 1813 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:12:14.493565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:12:14.493704 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:12:15.016644 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 47728 ssh2: RSA SHA256:Szwo06Oo4IMasO/ql4vXN2RM2hXm5LtgxNU2ypRvnZ8
Dec 13 14:12:15.018864 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:12:15.026084 systemd-logind[1460]: New session 2 of user core.
Dec 13 14:12:15.037306 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 14:12:15.692407 sshd[1821]: Connection closed by 147.75.109.163 port 47728
Dec 13 14:12:15.693433 sshd-session[1803]: pam_unix(sshd:session): session closed for user core
Dec 13 14:12:15.699369 systemd[1]: sshd@6-49.13.133.85:22-147.75.109.163:47728.service: Deactivated successfully.
Dec 13 14:12:15.701785 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:12:15.702925 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:12:15.704571 systemd-logind[1460]: Removed session 2.
Dec 13 14:12:15.871453 systemd[1]: Started sshd@7-49.13.133.85:22-147.75.109.163:47742.service - OpenSSH per-connection server daemon (147.75.109.163:47742).
Dec 13 14:12:16.861456 sshd[1826]: Accepted publickey for core from 147.75.109.163 port 47742 ssh2: RSA SHA256:Szwo06Oo4IMasO/ql4vXN2RM2hXm5LtgxNU2ypRvnZ8
Dec 13 14:12:16.863219 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:12:16.871139 systemd-logind[1460]: New session 3 of user core.
Dec 13 14:12:16.878283 systemd[1]: Started session-3.scope - Session 3 of User core.
Dec 13 14:12:17.539223 sshd[1828]: Connection closed by 147.75.109.163 port 47742
Dec 13 14:12:17.540582 sshd-session[1826]: pam_unix(sshd:session): session closed for user core
Dec 13 14:12:17.546456 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:12:17.547732 systemd[1]: sshd@7-49.13.133.85:22-147.75.109.163:47742.service: Deactivated successfully.
Dec 13 14:12:17.550013 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:12:17.551123 systemd-logind[1460]: Removed session 3.
Dec 13 14:12:17.725745 systemd[1]: Started sshd@8-49.13.133.85:22-147.75.109.163:33446.service - OpenSSH per-connection server daemon (147.75.109.163:33446).
Dec 13 14:12:18.721497 sshd[1833]: Accepted publickey for core from 147.75.109.163 port 33446 ssh2: RSA SHA256:Szwo06Oo4IMasO/ql4vXN2RM2hXm5LtgxNU2ypRvnZ8
Dec 13 14:12:18.723760 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:12:18.729743 systemd-logind[1460]: New session 4 of user core.
Dec 13 14:12:18.736352 systemd[1]: Started session-4.scope - Session 4 of User core.
Dec 13 14:12:19.415044 sshd[1835]: Connection closed by 147.75.109.163 port 33446
Dec 13 14:12:19.415660 sshd-session[1833]: pam_unix(sshd:session): session closed for user core
Dec 13 14:12:19.422609 systemd[1]: sshd@8-49.13.133.85:22-147.75.109.163:33446.service: Deactivated successfully.
Dec 13 14:12:19.424750 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:12:19.425757 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:12:19.427020 systemd-logind[1460]: Removed session 4.
Dec 13 14:12:19.599501 systemd[1]: Started sshd@9-49.13.133.85:22-147.75.109.163:33460.service - OpenSSH per-connection server daemon (147.75.109.163:33460).
Dec 13 14:12:20.590497 sshd[1840]: Accepted publickey for core from 147.75.109.163 port 33460 ssh2: RSA SHA256:Szwo06Oo4IMasO/ql4vXN2RM2hXm5LtgxNU2ypRvnZ8
Dec 13 14:12:20.592387 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:12:20.598693 systemd-logind[1460]: New session 5 of user core.
Dec 13 14:12:20.604403 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 14:12:21.127438 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 14:12:21.128392 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 14:12:21.145263 sudo[1843]: pam_unix(sudo:session): session closed for user root
Dec 13 14:12:21.307480 sshd[1842]: Connection closed by 147.75.109.163 port 33460
Dec 13 14:12:21.308716 sshd-session[1840]: pam_unix(sshd:session): session closed for user core
Dec 13 14:12:21.313409 systemd[1]: sshd@9-49.13.133.85:22-147.75.109.163:33460.service: Deactivated successfully.
Dec 13 14:12:21.315522 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:12:21.317427 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:12:21.319623 systemd-logind[1460]: Removed session 5.
Dec 13 14:12:21.476305 systemd[1]: Started sshd@10-49.13.133.85:22-147.75.109.163:33474.service - OpenSSH per-connection server daemon (147.75.109.163:33474).
Dec 13 14:12:22.469227 sshd[1848]: Accepted publickey for core from 147.75.109.163 port 33474 ssh2: RSA SHA256:Szwo06Oo4IMasO/ql4vXN2RM2hXm5LtgxNU2ypRvnZ8
Dec 13 14:12:22.471822 sshd-session[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:12:22.478271 systemd-logind[1460]: New session 6 of user core.
Dec 13 14:12:22.487336 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 14:12:22.989036 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 14:12:22.989362 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 14:12:22.993669 sudo[1852]: pam_unix(sudo:session): session closed for user root
Dec 13 14:12:22.999790 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 14:12:23.000320 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 14:12:23.029663 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 14:12:23.063662 augenrules[1874]: No rules
Dec 13 14:12:23.064508 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 14:12:23.064684 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 14:12:23.066599 sudo[1851]: pam_unix(sudo:session): session closed for user root
Dec 13 14:12:23.226071 sshd[1850]: Connection closed by 147.75.109.163 port 33474
Dec 13 14:12:23.226672 sshd-session[1848]: pam_unix(sshd:session): session closed for user core
Dec 13 14:12:23.230936 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit.
Dec 13 14:12:23.232307 systemd[1]: sshd@10-49.13.133.85:22-147.75.109.163:33474.service: Deactivated successfully.
Dec 13 14:12:23.235748 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 14:12:23.237838 systemd-logind[1460]: Removed session 6.
Dec 13 14:12:23.400364 systemd[1]: Started sshd@11-49.13.133.85:22-147.75.109.163:33490.service - OpenSSH per-connection server daemon (147.75.109.163:33490).
Dec 13 14:12:24.392491 sshd[1882]: Accepted publickey for core from 147.75.109.163 port 33490 ssh2: RSA SHA256:Szwo06Oo4IMasO/ql4vXN2RM2hXm5LtgxNU2ypRvnZ8
Dec 13 14:12:24.394271 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 14:12:24.399085 systemd-logind[1460]: New session 7 of user core.
Dec 13 14:12:24.407358 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 14:12:24.503152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Dec 13 14:12:24.512484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:12:24.680472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:12:24.692762 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 14:12:24.754425 kubelet[1893]: E1213 14:12:24.754188 1893 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:12:24.757318 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:12:24.757470 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:12:24.917003 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:12:24.917312 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 14:12:25.613331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:12:25.628549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:12:25.665038 systemd[1]: Reloading requested from client PID 1940 ('systemctl') (unit session-7.scope)...
Dec 13 14:12:25.665211 systemd[1]: Reloading...
Dec 13 14:12:25.789972 zram_generator::config[1977]: No configuration found.
Dec 13 14:12:25.914191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:12:25.982676 systemd[1]: Reloading finished in 317 ms.
Dec 13 14:12:26.044041 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 14:12:26.044288 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 14:12:26.044805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:12:26.050468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 14:12:26.173897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 14:12:26.185542 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 14:12:26.240971 kubelet[2030]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:12:26.240971 kubelet[2030]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:12:26.240971 kubelet[2030]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:12:26.242309 kubelet[2030]: I1213 14:12:26.242207 2030 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:12:26.603933 kubelet[2030]: I1213 14:12:26.603857 2030 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:12:26.603933 kubelet[2030]: I1213 14:12:26.603928 2030 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:12:26.603933 kubelet[2030]: I1213 14:12:26.604724 2030 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:12:26.627133 kubelet[2030]: I1213 14:12:26.627029 2030 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:12:26.637552 kubelet[2030]: I1213 14:12:26.637522 2030 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:12:26.637772 kubelet[2030]: I1213 14:12:26.637761 2030 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:12:26.638019 kubelet[2030]: I1213 14:12:26.637999 2030 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:12:26.638136 kubelet[2030]: I1213 14:12:26.638027 2030 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:12:26.638136 kubelet[2030]: I1213 14:12:26.638037 2030 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:12:26.638186 kubelet[2030]: I1213 14:12:26.638170 2030 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:12:26.642136 kubelet[2030]: I1213 14:12:26.641718 2030 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:12:26.642136 kubelet[2030]: I1213 14:12:26.641759 2030 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:12:26.642136 kubelet[2030]: I1213 14:12:26.641789 2030 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:12:26.642136 kubelet[2030]: I1213 14:12:26.641802 2030 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:12:26.646455 kubelet[2030]: E1213 14:12:26.645274 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:12:26.646455 kubelet[2030]: E1213 14:12:26.645318 2030 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:12:26.646455 kubelet[2030]: I1213 14:12:26.646000 2030 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 14:12:26.647070 kubelet[2030]: I1213 14:12:26.647030 2030 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:12:26.647275 kubelet[2030]: W1213 14:12:26.647243 2030 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:12:26.650974 kubelet[2030]: I1213 14:12:26.648574 2030 server.go:1256] "Started kubelet"
Dec 13 14:12:26.650974 kubelet[2030]: I1213 14:12:26.648864 2030 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:12:26.650974 kubelet[2030]: I1213 14:12:26.649756 2030 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:12:26.652100 kubelet[2030]: I1213 14:12:26.652058 2030 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:12:26.652390 kubelet[2030]: I1213 14:12:26.652362 2030 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:12:26.654903 kubelet[2030]: I1213 14:12:26.654875 2030 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:12:26.656644 kubelet[2030]: I1213 14:12:26.656625 2030 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:12:26.658402 kubelet[2030]: I1213 14:12:26.658316 2030 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:12:26.658402 kubelet[2030]: I1213 14:12:26.658395 2030 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:12:26.662348 kubelet[2030]: W1213 14:12:26.662315 2030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:12:26.662348 kubelet[2030]: E1213 14:12:26.662351 2030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:12:26.664976 kubelet[2030]: E1213 14:12:26.664906 2030 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.1810c1fc794b5914 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2024-12-13 14:12:26.64854146 +0000 UTC m=+0.457352363,LastTimestamp:2024-12-13 14:12:26.64854146 +0000 UTC m=+0.457352363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}"
Dec 13 14:12:26.665763 kubelet[2030]: W1213 14:12:26.665224 2030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:12:26.665763 kubelet[2030]: E1213 14:12:26.665260 2030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:12:26.668548 kubelet[2030]: E1213 14:12:26.668521 2030 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:12:26.668939 kubelet[2030]: I1213 14:12:26.668914 2030 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:12:26.669111 kubelet[2030]: I1213 14:12:26.669059 2030 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:12:26.670635 kubelet[2030]: I1213 14:12:26.670605 2030 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:12:26.670836 kubelet[2030]: W1213 14:12:26.670820 2030 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:12:26.671319 kubelet[2030]: E1213 14:12:26.671278 2030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Dec 13 14:12:26.672260 kubelet[2030]: E1213 14:12:26.671161 2030 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Dec 13 14:12:26.688572 kubelet[2030]: E1213 14:12:26.688542 2030 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.1810c1fc7a7beaa3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2024-12-13 14:12:26.668501667 +0000 UTC m=+0.477312530,LastTimestamp:2024-12-13 14:12:26.668501667 +0000 UTC m=+0.477312530,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}"
Dec 13 14:12:26.691944 kubelet[2030]: E1213 14:12:26.691915 2030 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.1810c1fc7bb98ec3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.4 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2024-12-13 14:12:26.689318595 +0000 UTC m=+0.498129458,LastTimestamp:2024-12-13 14:12:26.689318595 +0000 UTC m=+0.498129458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}"
Dec 13 14:12:26.693618 kubelet[2030]: I1213 14:12:26.693594 2030 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:12:26.693618 kubelet[2030]: I1213 14:12:26.693615 2030 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:12:26.693618 kubelet[2030]: I1213 14:12:26.693630 2030 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:12:26.696939 kubelet[2030]: I1213 14:12:26.696899 2030 policy_none.go:49] "None policy: Start"
Dec 13 14:12:26.697595 kubelet[2030]: I1213 14:12:26.697570 2030 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:12:26.697631 kubelet[2030]: I1213 14:12:26.697606 2030 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:12:26.707463 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Dec 13 14:12:26.722582 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Dec 13 14:12:26.727264 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Dec 13 14:12:26.733159 kubelet[2030]: I1213 14:12:26.733131 2030 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:12:26.735028 kubelet[2030]: I1213 14:12:26.734947 2030 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:12:26.738508 kubelet[2030]: E1213 14:12:26.738474 2030 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found"
Dec 13 14:12:26.740577 kubelet[2030]: I1213 14:12:26.740545 2030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:12:26.741813 kubelet[2030]: I1213 14:12:26.741783 2030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:12:26.741813 kubelet[2030]: I1213 14:12:26.741811 2030 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:12:26.742092 kubelet[2030]: I1213 14:12:26.741832 2030 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:12:26.742092 kubelet[2030]: E1213 14:12:26.742012 2030 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 14:12:26.766991 kubelet[2030]: I1213 14:12:26.766431 2030 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.4"
Dec 13 14:12:26.776390 kubelet[2030]: I1213 14:12:26.776135 2030 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.4"
Dec 13 14:12:26.788608 kubelet[2030]: E1213 14:12:26.788563 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:26.889215 kubelet[2030]: E1213 14:12:26.889052 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:26.989920 kubelet[2030]: E1213 14:12:26.989860 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:27.090996 kubelet[2030]: E1213 14:12:27.090886 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:27.192173 kubelet[2030]: E1213 14:12:27.191996 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:27.293259 kubelet[2030]: E1213 14:12:27.293187 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:27.393812 kubelet[2030]: E1213 14:12:27.393717 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:27.494854 kubelet[2030]: E1213 14:12:27.494674 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:27.595861 kubelet[2030]: E1213 14:12:27.595771 2030 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found"
Dec 13 14:12:27.609435 kubelet[2030]: I1213 14:12:27.609247 2030 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:12:27.609531 kubelet[2030]: W1213 14:12:27.609478 2030 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:12:27.646941 kubelet[2030]: E1213 14:12:27.646360 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:12:27.698843 kubelet[2030]: I1213 14:12:27.698573 2030 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 14:12:27.699803 containerd[1482]: time="2024-12-13T14:12:27.699643546Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:12:27.700684 kubelet[2030]: I1213 14:12:27.700145 2030 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 14:12:27.710236 sudo[1902]: pam_unix(sudo:session): session closed for user root
Dec 13 14:12:27.870045 sshd[1884]: Connection closed by 147.75.109.163 port 33490
Dec 13 14:12:27.870702 sshd-session[1882]: pam_unix(sshd:session): session closed for user core
Dec 13 14:12:27.875156 systemd[1]: sshd@11-49.13.133.85:22-147.75.109.163:33490.service: Deactivated successfully.
Dec 13 14:12:27.878070 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 14:12:27.880420 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit.
Dec 13 14:12:27.881647 systemd-logind[1460]: Removed session 7.
Dec 13 14:12:28.645390 kubelet[2030]: I1213 14:12:28.645325 2030 apiserver.go:52] "Watching apiserver"
Dec 13 14:12:28.647304 kubelet[2030]: E1213 14:12:28.647251 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:12:28.652695 kubelet[2030]: I1213 14:12:28.652651 2030 topology_manager.go:215] "Topology Admit Handler" podUID="c08ca40c-0d85-4543-b959-ceb50289b0d6" podNamespace="calico-system" podName="calico-node-nmcd6"
Dec 13 14:12:28.652857 kubelet[2030]: I1213 14:12:28.652758 2030 topology_manager.go:215] "Topology Admit Handler" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" podNamespace="calico-system" podName="csi-node-driver-jd54l"
Dec 13 14:12:28.652857 kubelet[2030]: I1213 14:12:28.652847 2030 topology_manager.go:215] "Topology Admit Handler" podUID="93e8b93a-a531-4b6e-9e46-e4516a6e20db" podNamespace="kube-system" podName="kube-proxy-c7x2f"
Dec 13 14:12:28.653943 kubelet[2030]: E1213 14:12:28.653801 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356"
Dec 13 14:12:28.658719 kubelet[2030]: I1213 14:12:28.658689 2030 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:12:28.661550 systemd[1]: Created slice kubepods-besteffort-pod93e8b93a_a531_4b6e_9e46_e4516a6e20db.slice - libcontainer container kubepods-besteffort-pod93e8b93a_a531_4b6e_9e46_e4516a6e20db.slice.
Dec 13 14:12:28.670609 kubelet[2030]: I1213 14:12:28.670549 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93e8b93a-a531-4b6e-9e46-e4516a6e20db-xtables-lock\") pod \"kube-proxy-c7x2f\" (UID: \"93e8b93a-a531-4b6e-9e46-e4516a6e20db\") " pod="kube-system/kube-proxy-c7x2f" Dec 13 14:12:28.670609 kubelet[2030]: I1213 14:12:28.670596 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-cni-net-dir\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.670609 kubelet[2030]: I1213 14:12:28.670623 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c87203ee-871f-4f04-a281-8859d4bb2356-kubelet-dir\") pod \"csi-node-driver-jd54l\" (UID: \"c87203ee-871f-4f04-a281-8859d4bb2356\") " pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:28.670609 kubelet[2030]: I1213 14:12:28.670643 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c87203ee-871f-4f04-a281-8859d4bb2356-socket-dir\") pod \"csi-node-driver-jd54l\" (UID: \"c87203ee-871f-4f04-a281-8859d4bb2356\") " pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:28.671065 kubelet[2030]: I1213 14:12:28.670664 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93e8b93a-a531-4b6e-9e46-e4516a6e20db-kube-proxy\") pod \"kube-proxy-c7x2f\" (UID: \"93e8b93a-a531-4b6e-9e46-e4516a6e20db\") " pod="kube-system/kube-proxy-c7x2f" Dec 13 14:12:28.671065 kubelet[2030]: I1213 14:12:28.670683 2030 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-lib-modules\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.671065 kubelet[2030]: I1213 14:12:28.670705 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c08ca40c-0d85-4543-b959-ceb50289b0d6-tigera-ca-bundle\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.671065 kubelet[2030]: I1213 14:12:28.670726 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-cni-bin-dir\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.671065 kubelet[2030]: I1213 14:12:28.670748 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw652\" (UniqueName: \"kubernetes.io/projected/93e8b93a-a531-4b6e-9e46-e4516a6e20db-kube-api-access-rw652\") pod \"kube-proxy-c7x2f\" (UID: \"93e8b93a-a531-4b6e-9e46-e4516a6e20db\") " pod="kube-system/kube-proxy-c7x2f" Dec 13 14:12:28.671310 kubelet[2030]: I1213 14:12:28.670768 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c87203ee-871f-4f04-a281-8859d4bb2356-registration-dir\") pod \"csi-node-driver-jd54l\" (UID: \"c87203ee-871f-4f04-a281-8859d4bb2356\") " pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:28.671310 kubelet[2030]: I1213 14:12:28.670788 2030 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2sh\" (UniqueName: \"kubernetes.io/projected/c87203ee-871f-4f04-a281-8859d4bb2356-kube-api-access-5h2sh\") pod \"csi-node-driver-jd54l\" (UID: \"c87203ee-871f-4f04-a281-8859d4bb2356\") " pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:28.671310 kubelet[2030]: I1213 14:12:28.670814 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-xtables-lock\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.671310 kubelet[2030]: I1213 14:12:28.670868 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-policysync\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.671310 kubelet[2030]: I1213 14:12:28.670890 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c08ca40c-0d85-4543-b959-ceb50289b0d6-node-certs\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.671579 kubelet[2030]: I1213 14:12:28.670909 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcgmk\" (UniqueName: \"kubernetes.io/projected/c08ca40c-0d85-4543-b959-ceb50289b0d6-kube-api-access-pcgmk\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.671579 kubelet[2030]: I1213 14:12:28.670941 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c87203ee-871f-4f04-a281-8859d4bb2356-varrun\") pod \"csi-node-driver-jd54l\" (UID: \"c87203ee-871f-4f04-a281-8859d4bb2356\") " pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:28.671579 kubelet[2030]: I1213 14:12:28.671272 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93e8b93a-a531-4b6e-9e46-e4516a6e20db-lib-modules\") pod \"kube-proxy-c7x2f\" (UID: \"93e8b93a-a531-4b6e-9e46-e4516a6e20db\") " pod="kube-system/kube-proxy-c7x2f" Dec 13 14:12:28.671579 kubelet[2030]: I1213 14:12:28.671323 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-var-run-calico\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.671579 kubelet[2030]: I1213 14:12:28.671352 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-var-lib-calico\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.672050 kubelet[2030]: I1213 14:12:28.671373 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-cni-log-dir\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.672050 kubelet[2030]: I1213 14:12:28.671457 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/c08ca40c-0d85-4543-b959-ceb50289b0d6-flexvol-driver-host\") pod \"calico-node-nmcd6\" (UID: \"c08ca40c-0d85-4543-b959-ceb50289b0d6\") " pod="calico-system/calico-node-nmcd6" Dec 13 14:12:28.678385 systemd[1]: Created slice kubepods-besteffort-podc08ca40c_0d85_4543_b959_ceb50289b0d6.slice - libcontainer container kubepods-besteffort-podc08ca40c_0d85_4543_b959_ceb50289b0d6.slice. Dec 13 14:12:28.786499 kubelet[2030]: E1213 14:12:28.786470 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:28.786982 kubelet[2030]: W1213 14:12:28.786644 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:28.786982 kubelet[2030]: E1213 14:12:28.786676 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:28.791904 kubelet[2030]: E1213 14:12:28.791185 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:28.791904 kubelet[2030]: W1213 14:12:28.791213 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:28.791904 kubelet[2030]: E1213 14:12:28.791239 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:28.800463 kubelet[2030]: E1213 14:12:28.799925 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:28.800463 kubelet[2030]: W1213 14:12:28.799988 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:28.800463 kubelet[2030]: E1213 14:12:28.800025 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:28.801053 kubelet[2030]: E1213 14:12:28.801029 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:28.801053 kubelet[2030]: W1213 14:12:28.801049 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:28.801139 kubelet[2030]: E1213 14:12:28.801068 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:28.979708 containerd[1482]: time="2024-12-13T14:12:28.977885860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7x2f,Uid:93e8b93a-a531-4b6e-9e46-e4516a6e20db,Namespace:kube-system,Attempt:0,}" Dec 13 14:12:28.983020 containerd[1482]: time="2024-12-13T14:12:28.982943022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nmcd6,Uid:c08ca40c-0d85-4543-b959-ceb50289b0d6,Namespace:calico-system,Attempt:0,}" Dec 13 14:12:29.607044 containerd[1482]: time="2024-12-13T14:12:29.606810148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:12:29.610140 containerd[1482]: time="2024-12-13T14:12:29.610065829Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Dec 13 14:12:29.611422 containerd[1482]: time="2024-12-13T14:12:29.611359229Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:12:29.613300 containerd[1482]: time="2024-12-13T14:12:29.613228430Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:12:29.615311 containerd[1482]: time="2024-12-13T14:12:29.615216671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 14:12:29.619005 containerd[1482]: time="2024-12-13T14:12:29.617930711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 14:12:29.619005 containerd[1482]: time="2024-12-13T14:12:29.618657392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 640.637012ms" Dec 13 14:12:29.620493 containerd[1482]: time="2024-12-13T14:12:29.620298512Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 637.00109ms" Dec 13 14:12:29.647902 kubelet[2030]: E1213 14:12:29.647807 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:29.752987 containerd[1482]: time="2024-12-13T14:12:29.749766355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:12:29.752987 containerd[1482]: time="2024-12-13T14:12:29.749872275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:12:29.752987 containerd[1482]: time="2024-12-13T14:12:29.749893075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:12:29.752987 containerd[1482]: time="2024-12-13T14:12:29.750562635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:12:29.754224 containerd[1482]: time="2024-12-13T14:12:29.748763875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:12:29.754224 containerd[1482]: time="2024-12-13T14:12:29.748871595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:12:29.754224 containerd[1482]: time="2024-12-13T14:12:29.748908635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:12:29.754224 containerd[1482]: time="2024-12-13T14:12:29.749092435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:12:29.793986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022171660.mount: Deactivated successfully. Dec 13 14:12:29.839548 systemd[1]: Started cri-containerd-ac9a952ac6f5718b144c0444a1b088b2720c5444ffd1dbbec847458da6b6e96b.scope - libcontainer container ac9a952ac6f5718b144c0444a1b088b2720c5444ffd1dbbec847458da6b6e96b. Dec 13 14:12:29.851301 systemd[1]: Started cri-containerd-4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917.scope - libcontainer container 4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917. 
Dec 13 14:12:29.876087 containerd[1482]: time="2024-12-13T14:12:29.875775996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c7x2f,Uid:93e8b93a-a531-4b6e-9e46-e4516a6e20db,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac9a952ac6f5718b144c0444a1b088b2720c5444ffd1dbbec847458da6b6e96b\"" Dec 13 14:12:29.884933 containerd[1482]: time="2024-12-13T14:12:29.884424599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:12:29.891703 containerd[1482]: time="2024-12-13T14:12:29.891651202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nmcd6,Uid:c08ca40c-0d85-4543-b959-ceb50289b0d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917\"" Dec 13 14:12:30.648899 kubelet[2030]: E1213 14:12:30.648796 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:30.742927 kubelet[2030]: E1213 14:12:30.742488 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:31.283218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012923664.mount: Deactivated successfully. 
Dec 13 14:12:31.552441 containerd[1482]: time="2024-12-13T14:12:31.552253093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:31.553994 containerd[1482]: time="2024-12-13T14:12:31.553886454Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25274003" Dec 13 14:12:31.555985 containerd[1482]: time="2024-12-13T14:12:31.554837814Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:31.563007 containerd[1482]: time="2024-12-13T14:12:31.562929457Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.678437458s" Dec 13 14:12:31.563007 containerd[1482]: time="2024-12-13T14:12:31.562998297Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:12:31.564144 containerd[1482]: time="2024-12-13T14:12:31.564081217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:31.564410 containerd[1482]: time="2024-12-13T14:12:31.564385537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:12:31.566658 containerd[1482]: time="2024-12-13T14:12:31.566599738Z" level=info msg="CreateContainer within sandbox \"ac9a952ac6f5718b144c0444a1b088b2720c5444ffd1dbbec847458da6b6e96b\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:12:31.587018 containerd[1482]: time="2024-12-13T14:12:31.586930104Z" level=info msg="CreateContainer within sandbox \"ac9a952ac6f5718b144c0444a1b088b2720c5444ffd1dbbec847458da6b6e96b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d4b7a95f952177163f94756803a1122c34bd3b37a5e1809f968cf173a29cad7\"" Dec 13 14:12:31.590098 containerd[1482]: time="2024-12-13T14:12:31.588202065Z" level=info msg="StartContainer for \"1d4b7a95f952177163f94756803a1122c34bd3b37a5e1809f968cf173a29cad7\"" Dec 13 14:12:31.618173 systemd[1]: Started cri-containerd-1d4b7a95f952177163f94756803a1122c34bd3b37a5e1809f968cf173a29cad7.scope - libcontainer container 1d4b7a95f952177163f94756803a1122c34bd3b37a5e1809f968cf173a29cad7. Dec 13 14:12:31.649328 kubelet[2030]: E1213 14:12:31.649277 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:31.653007 containerd[1482]: time="2024-12-13T14:12:31.651570165Z" level=info msg="StartContainer for \"1d4b7a95f952177163f94756803a1122c34bd3b37a5e1809f968cf173a29cad7\" returns successfully" Dec 13 14:12:31.778827 kubelet[2030]: I1213 14:12:31.778772 2030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-c7x2f" podStartSLOduration=4.098435067 podStartE2EDuration="5.778721005s" podCreationTimestamp="2024-12-13 14:12:26 +0000 UTC" firstStartedPulling="2024-12-13 14:12:29.883146759 +0000 UTC m=+3.691957622" lastFinishedPulling="2024-12-13 14:12:31.563432697 +0000 UTC m=+5.372243560" observedRunningTime="2024-12-13 14:12:31.778599645 +0000 UTC m=+5.587410548" watchObservedRunningTime="2024-12-13 14:12:31.778721005 +0000 UTC m=+5.587531868" Dec 13 14:12:31.783140 kubelet[2030]: E1213 14:12:31.783104 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.783332 kubelet[2030]: W1213 
14:12:31.783312 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.783406 kubelet[2030]: E1213 14:12:31.783395 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.783815 kubelet[2030]: E1213 14:12:31.783790 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.783903 kubelet[2030]: W1213 14:12:31.783890 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.784000 kubelet[2030]: E1213 14:12:31.783986 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.784404 kubelet[2030]: E1213 14:12:31.784388 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.784492 kubelet[2030]: W1213 14:12:31.784479 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.784555 kubelet[2030]: E1213 14:12:31.784545 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.784891 kubelet[2030]: E1213 14:12:31.784877 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.784985 kubelet[2030]: W1213 14:12:31.784971 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.785063 kubelet[2030]: E1213 14:12:31.785052 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.785396 kubelet[2030]: E1213 14:12:31.785384 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.785469 kubelet[2030]: W1213 14:12:31.785457 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.785527 kubelet[2030]: E1213 14:12:31.785518 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.785838 kubelet[2030]: E1213 14:12:31.785823 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.785917 kubelet[2030]: W1213 14:12:31.785905 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.786012 kubelet[2030]: E1213 14:12:31.786001 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.786575 kubelet[2030]: E1213 14:12:31.786464 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.786575 kubelet[2030]: W1213 14:12:31.786477 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.786575 kubelet[2030]: E1213 14:12:31.786491 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.786855 kubelet[2030]: E1213 14:12:31.786762 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.786855 kubelet[2030]: W1213 14:12:31.786774 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.786855 kubelet[2030]: E1213 14:12:31.786788 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.787403 kubelet[2030]: E1213 14:12:31.787278 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.787403 kubelet[2030]: W1213 14:12:31.787292 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.787403 kubelet[2030]: E1213 14:12:31.787307 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.787581 kubelet[2030]: E1213 14:12:31.787568 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.787693 kubelet[2030]: W1213 14:12:31.787633 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.787693 kubelet[2030]: E1213 14:12:31.787653 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.788171 kubelet[2030]: E1213 14:12:31.788088 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.788171 kubelet[2030]: W1213 14:12:31.788105 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.788171 kubelet[2030]: E1213 14:12:31.788121 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.788657 kubelet[2030]: E1213 14:12:31.788529 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.788657 kubelet[2030]: W1213 14:12:31.788542 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.788657 kubelet[2030]: E1213 14:12:31.788554 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.788925 kubelet[2030]: E1213 14:12:31.788855 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.788925 kubelet[2030]: W1213 14:12:31.788872 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.788925 kubelet[2030]: E1213 14:12:31.788885 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.789349 kubelet[2030]: E1213 14:12:31.789256 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.789349 kubelet[2030]: W1213 14:12:31.789267 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.789349 kubelet[2030]: E1213 14:12:31.789281 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.789680 kubelet[2030]: E1213 14:12:31.789562 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.789680 kubelet[2030]: W1213 14:12:31.789574 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.789680 kubelet[2030]: E1213 14:12:31.789587 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.789843 kubelet[2030]: E1213 14:12:31.789832 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.789993 kubelet[2030]: W1213 14:12:31.789886 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.789993 kubelet[2030]: E1213 14:12:31.789930 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.790304 kubelet[2030]: E1213 14:12:31.790243 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.790304 kubelet[2030]: W1213 14:12:31.790255 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.790304 kubelet[2030]: E1213 14:12:31.790267 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.790639 kubelet[2030]: E1213 14:12:31.790537 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.790639 kubelet[2030]: W1213 14:12:31.790548 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.790639 kubelet[2030]: E1213 14:12:31.790560 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.790897 kubelet[2030]: E1213 14:12:31.790783 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.790897 kubelet[2030]: W1213 14:12:31.790833 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.790897 kubelet[2030]: E1213 14:12:31.790849 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.791171 kubelet[2030]: E1213 14:12:31.791151 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.791200 kubelet[2030]: W1213 14:12:31.791172 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.791200 kubelet[2030]: E1213 14:12:31.791189 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.791510 kubelet[2030]: E1213 14:12:31.791496 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.791510 kubelet[2030]: W1213 14:12:31.791507 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.791583 kubelet[2030]: E1213 14:12:31.791519 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.791708 kubelet[2030]: E1213 14:12:31.791692 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.791708 kubelet[2030]: W1213 14:12:31.791706 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.791762 kubelet[2030]: E1213 14:12:31.791725 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.792007 kubelet[2030]: E1213 14:12:31.791994 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.792007 kubelet[2030]: W1213 14:12:31.792007 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.792088 kubelet[2030]: E1213 14:12:31.792029 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.792208 kubelet[2030]: E1213 14:12:31.792197 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.792208 kubelet[2030]: W1213 14:12:31.792208 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.792262 kubelet[2030]: E1213 14:12:31.792225 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.792611 kubelet[2030]: E1213 14:12:31.792548 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.792611 kubelet[2030]: W1213 14:12:31.792582 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.792675 kubelet[2030]: E1213 14:12:31.792615 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.793010 kubelet[2030]: E1213 14:12:31.792991 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.793065 kubelet[2030]: W1213 14:12:31.793013 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.793310 kubelet[2030]: E1213 14:12:31.793114 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.793310 kubelet[2030]: E1213 14:12:31.793293 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.793310 kubelet[2030]: W1213 14:12:31.793307 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.793399 kubelet[2030]: E1213 14:12:31.793346 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.793754 kubelet[2030]: E1213 14:12:31.793734 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.793790 kubelet[2030]: W1213 14:12:31.793761 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.793834 kubelet[2030]: E1213 14:12:31.793816 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.794172 kubelet[2030]: E1213 14:12:31.794154 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.794225 kubelet[2030]: W1213 14:12:31.794174 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.794225 kubelet[2030]: E1213 14:12:31.794212 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.794536 kubelet[2030]: E1213 14:12:31.794519 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.794575 kubelet[2030]: W1213 14:12:31.794538 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.794600 kubelet[2030]: E1213 14:12:31.794576 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:31.795239 kubelet[2030]: E1213 14:12:31.795216 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.795283 kubelet[2030]: W1213 14:12:31.795245 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.795316 kubelet[2030]: E1213 14:12:31.795283 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:31.795634 kubelet[2030]: E1213 14:12:31.795612 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:31.795674 kubelet[2030]: W1213 14:12:31.795636 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:31.795674 kubelet[2030]: E1213 14:12:31.795659 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.651691 kubelet[2030]: E1213 14:12:32.651620 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:32.743048 kubelet[2030]: E1213 14:12:32.742612 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:32.799451 kubelet[2030]: E1213 14:12:32.799149 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.799451 kubelet[2030]: W1213 14:12:32.799197 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.799451 kubelet[2030]: E1213 14:12:32.799247 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.800504 kubelet[2030]: E1213 14:12:32.799991 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.800504 kubelet[2030]: W1213 14:12:32.800156 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.800504 kubelet[2030]: E1213 14:12:32.800194 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.800997 kubelet[2030]: E1213 14:12:32.800937 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.801145 kubelet[2030]: W1213 14:12:32.801115 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.801468 kubelet[2030]: E1213 14:12:32.801266 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.802131 kubelet[2030]: E1213 14:12:32.801804 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.802131 kubelet[2030]: W1213 14:12:32.801825 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.802131 kubelet[2030]: E1213 14:12:32.801843 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.803353 kubelet[2030]: E1213 14:12:32.803310 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.803902 kubelet[2030]: W1213 14:12:32.803638 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.803902 kubelet[2030]: E1213 14:12:32.803693 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.804378 kubelet[2030]: E1213 14:12:32.804347 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.804873 kubelet[2030]: W1213 14:12:32.804535 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.804873 kubelet[2030]: E1213 14:12:32.804580 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.805206 kubelet[2030]: E1213 14:12:32.805176 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.805347 kubelet[2030]: W1213 14:12:32.805316 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.805483 kubelet[2030]: E1213 14:12:32.805462 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.806277 kubelet[2030]: E1213 14:12:32.806234 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.806932 kubelet[2030]: W1213 14:12:32.806491 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.806932 kubelet[2030]: E1213 14:12:32.806550 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.807350 kubelet[2030]: E1213 14:12:32.807317 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.807491 kubelet[2030]: W1213 14:12:32.807463 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.807705 kubelet[2030]: E1213 14:12:32.807635 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.808364 kubelet[2030]: E1213 14:12:32.808206 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.808364 kubelet[2030]: W1213 14:12:32.808220 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.808364 kubelet[2030]: E1213 14:12:32.808235 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.808763 kubelet[2030]: E1213 14:12:32.808738 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.808992 kubelet[2030]: W1213 14:12:32.808914 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.809103 kubelet[2030]: E1213 14:12:32.809019 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.809383 kubelet[2030]: E1213 14:12:32.809360 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.809466 kubelet[2030]: W1213 14:12:32.809387 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.809466 kubelet[2030]: E1213 14:12:32.809417 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.810119 kubelet[2030]: E1213 14:12:32.809993 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.810119 kubelet[2030]: W1213 14:12:32.810027 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.810119 kubelet[2030]: E1213 14:12:32.810057 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.810618 kubelet[2030]: E1213 14:12:32.810440 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.810618 kubelet[2030]: W1213 14:12:32.810468 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.810618 kubelet[2030]: E1213 14:12:32.810495 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.810901 kubelet[2030]: E1213 14:12:32.810820 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.810901 kubelet[2030]: W1213 14:12:32.810863 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.810901 kubelet[2030]: E1213 14:12:32.810895 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.811292 kubelet[2030]: E1213 14:12:32.811271 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.811351 kubelet[2030]: W1213 14:12:32.811295 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.811351 kubelet[2030]: E1213 14:12:32.811333 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.811670 kubelet[2030]: E1213 14:12:32.811614 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.811670 kubelet[2030]: W1213 14:12:32.811629 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.811670 kubelet[2030]: E1213 14:12:32.811652 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.812029 kubelet[2030]: E1213 14:12:32.811938 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.812029 kubelet[2030]: W1213 14:12:32.811974 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.812029 kubelet[2030]: E1213 14:12:32.811987 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.812153 kubelet[2030]: E1213 14:12:32.812140 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.812153 kubelet[2030]: W1213 14:12:32.812150 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.812219 kubelet[2030]: E1213 14:12:32.812161 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.812304 kubelet[2030]: E1213 14:12:32.812292 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.812304 kubelet[2030]: W1213 14:12:32.812302 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.812366 kubelet[2030]: E1213 14:12:32.812313 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.812645 kubelet[2030]: E1213 14:12:32.812630 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.812645 kubelet[2030]: W1213 14:12:32.812643 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.812758 kubelet[2030]: E1213 14:12:32.812656 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.812885 kubelet[2030]: E1213 14:12:32.812871 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.812885 kubelet[2030]: W1213 14:12:32.812882 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.812993 kubelet[2030]: E1213 14:12:32.812898 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.813116 kubelet[2030]: E1213 14:12:32.813105 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.813116 kubelet[2030]: W1213 14:12:32.813115 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.813188 kubelet[2030]: E1213 14:12:32.813130 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.813268 kubelet[2030]: E1213 14:12:32.813257 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.813268 kubelet[2030]: W1213 14:12:32.813267 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.813335 kubelet[2030]: E1213 14:12:32.813281 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.813407 kubelet[2030]: E1213 14:12:32.813397 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.813407 kubelet[2030]: W1213 14:12:32.813406 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.813469 kubelet[2030]: E1213 14:12:32.813422 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.813582 kubelet[2030]: E1213 14:12:32.813573 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.813582 kubelet[2030]: W1213 14:12:32.813582 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.813704 kubelet[2030]: E1213 14:12:32.813597 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.813942 kubelet[2030]: E1213 14:12:32.813929 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.813942 kubelet[2030]: W1213 14:12:32.813940 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.814136 kubelet[2030]: E1213 14:12:32.814024 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.814136 kubelet[2030]: E1213 14:12:32.814098 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.814136 kubelet[2030]: W1213 14:12:32.814106 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.814136 kubelet[2030]: E1213 14:12:32.814117 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.814281 kubelet[2030]: E1213 14:12:32.814247 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.814281 kubelet[2030]: W1213 14:12:32.814253 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.814281 kubelet[2030]: E1213 14:12:32.814271 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.814429 kubelet[2030]: E1213 14:12:32.814402 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.814429 kubelet[2030]: W1213 14:12:32.814409 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.814429 kubelet[2030]: E1213 14:12:32.814421 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:32.814614 kubelet[2030]: E1213 14:12:32.814570 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.814614 kubelet[2030]: W1213 14:12:32.814578 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.814614 kubelet[2030]: E1213 14:12:32.814589 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:12:32.814944 kubelet[2030]: E1213 14:12:32.814930 2030 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:12:32.815010 kubelet[2030]: W1213 14:12:32.814944 2030 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:12:32.815010 kubelet[2030]: E1213 14:12:32.815001 2030 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:12:33.098776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684827145.mount: Deactivated successfully. Dec 13 14:12:33.551272 containerd[1482]: time="2024-12-13T14:12:33.551210668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:33.552654 containerd[1482]: time="2024-12-13T14:12:33.552421828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Dec 13 14:12:33.553887 containerd[1482]: time="2024-12-13T14:12:33.553417148Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:33.556288 containerd[1482]: time="2024-12-13T14:12:33.556242029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:33.557991 containerd[1482]: time="2024-12-13T14:12:33.557928790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with 
image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.993368853s" Dec 13 14:12:33.558096 containerd[1482]: time="2024-12-13T14:12:33.557990750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 14:12:33.561240 containerd[1482]: time="2024-12-13T14:12:33.561200191Z" level=info msg="CreateContainer within sandbox \"4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:12:33.580896 containerd[1482]: time="2024-12-13T14:12:33.580835396Z" level=info msg="CreateContainer within sandbox \"4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd\"" Dec 13 14:12:33.582909 containerd[1482]: time="2024-12-13T14:12:33.581408317Z" level=info msg="StartContainer for \"fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd\"" Dec 13 14:12:33.617261 systemd[1]: Started cri-containerd-fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd.scope - libcontainer container fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd. Dec 13 14:12:33.643679 systemd[1]: Started sshd@12-49.13.133.85:22-61.190.114.203:33294.service - OpenSSH per-connection server daemon (61.190.114.203:33294). 
Dec 13 14:12:33.652632 kubelet[2030]: E1213 14:12:33.652583 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:33.677315 containerd[1482]: time="2024-12-13T14:12:33.677265705Z" level=info msg="StartContainer for \"fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd\" returns successfully" Dec 13 14:12:33.690618 systemd[1]: cri-containerd-fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd.scope: Deactivated successfully. Dec 13 14:12:33.823838 containerd[1482]: time="2024-12-13T14:12:33.823373109Z" level=info msg="shim disconnected" id=fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd namespace=k8s.io Dec 13 14:12:33.823838 containerd[1482]: time="2024-12-13T14:12:33.823517669Z" level=warning msg="cleaning up after shim disconnected" id=fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd namespace=k8s.io Dec 13 14:12:33.823838 containerd[1482]: time="2024-12-13T14:12:33.823530309Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:12:34.077852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd772a44bec079bfb3e617f79a5643e547cbc9f28bbfc647ef9b7d39df5e49bd-rootfs.mount: Deactivated successfully. 
Dec 13 14:12:34.653652 kubelet[2030]: E1213 14:12:34.653569 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:34.743619 kubelet[2030]: E1213 14:12:34.743036 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:34.775629 containerd[1482]: time="2024-12-13T14:12:34.775500351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 14:12:35.654693 kubelet[2030]: E1213 14:12:35.654654 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:36.654917 kubelet[2030]: E1213 14:12:36.654854 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:36.744899 kubelet[2030]: E1213 14:12:36.743085 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:37.078620 sshd[2431]: Invalid user git from 61.190.114.203 port 33294 Dec 13 14:12:37.352551 sshd[2431]: Received disconnect from 61.190.114.203 port 33294:11: Bye Bye [preauth] Dec 13 14:12:37.352551 sshd[2431]: Disconnected from invalid user git 61.190.114.203 port 33294 [preauth] Dec 13 14:12:37.354894 systemd[1]: sshd@12-49.13.133.85:22-61.190.114.203:33294.service: Deactivated successfully. 
Dec 13 14:12:37.656129 kubelet[2030]: E1213 14:12:37.655894 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:38.656540 kubelet[2030]: E1213 14:12:38.656489 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:38.743654 kubelet[2030]: E1213 14:12:38.742643 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:39.113159 containerd[1482]: time="2024-12-13T14:12:39.113108359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:39.116423 containerd[1482]: time="2024-12-13T14:12:39.116365440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 14:12:39.118015 containerd[1482]: time="2024-12-13T14:12:39.117975881Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:39.121859 containerd[1482]: time="2024-12-13T14:12:39.121805602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:39.122849 containerd[1482]: time="2024-12-13T14:12:39.122594802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.347052331s" Dec 13 14:12:39.122849 containerd[1482]: time="2024-12-13T14:12:39.122757202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 14:12:39.127768 containerd[1482]: time="2024-12-13T14:12:39.127277723Z" level=info msg="CreateContainer within sandbox \"4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:12:39.152648 containerd[1482]: time="2024-12-13T14:12:39.152497250Z" level=info msg="CreateContainer within sandbox \"4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177\"" Dec 13 14:12:39.153291 containerd[1482]: time="2024-12-13T14:12:39.153219610Z" level=info msg="StartContainer for \"6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177\"" Dec 13 14:12:39.193381 systemd[1]: Started cri-containerd-6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177.scope - libcontainer container 6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177. 
Dec 13 14:12:39.235423 containerd[1482]: time="2024-12-13T14:12:39.235206912Z" level=info msg="StartContainer for \"6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177\" returns successfully" Dec 13 14:12:39.657537 kubelet[2030]: E1213 14:12:39.657489 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:39.752074 containerd[1482]: time="2024-12-13T14:12:39.751833688Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:12:39.755791 systemd[1]: cri-containerd-6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177.scope: Deactivated successfully. Dec 13 14:12:39.784172 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177-rootfs.mount: Deactivated successfully. 
Dec 13 14:12:39.847372 kubelet[2030]: I1213 14:12:39.847303 2030 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:12:39.940678 containerd[1482]: time="2024-12-13T14:12:39.940303497Z" level=info msg="shim disconnected" id=6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177 namespace=k8s.io Dec 13 14:12:39.940678 containerd[1482]: time="2024-12-13T14:12:39.940379817Z" level=warning msg="cleaning up after shim disconnected" id=6c366131374b438b7e3ced81fa660a39b277e5a6cb0b48ab2e55d2c4b99af177 namespace=k8s.io Dec 13 14:12:39.940678 containerd[1482]: time="2024-12-13T14:12:39.940393377Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 14:12:40.658593 kubelet[2030]: E1213 14:12:40.658528 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:40.752449 systemd[1]: Created slice kubepods-besteffort-podc87203ee_871f_4f04_a281_8859d4bb2356.slice - libcontainer container kubepods-besteffort-podc87203ee_871f_4f04_a281_8859d4bb2356.slice. 
Dec 13 14:12:40.756640 containerd[1482]: time="2024-12-13T14:12:40.756268468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:0,}" Dec 13 14:12:40.796470 containerd[1482]: time="2024-12-13T14:12:40.796236959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 14:12:40.859494 containerd[1482]: time="2024-12-13T14:12:40.857772654Z" level=error msg="Failed to destroy network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:40.859310 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1-shm.mount: Deactivated successfully. Dec 13 14:12:40.861296 containerd[1482]: time="2024-12-13T14:12:40.861125775Z" level=error msg="encountered an error cleaning up failed sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:40.861435 containerd[1482]: time="2024-12-13T14:12:40.861340375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:40.861680 kubelet[2030]: E1213 
14:12:40.861641 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:40.861790 kubelet[2030]: E1213 14:12:40.861743 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:40.861790 kubelet[2030]: E1213 14:12:40.861771 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:40.862151 kubelet[2030]: E1213 14:12:40.861870 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:41.659119 kubelet[2030]: E1213 14:12:41.659043 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:41.799877 kubelet[2030]: I1213 14:12:41.799364 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1" Dec 13 14:12:41.800076 containerd[1482]: time="2024-12-13T14:12:41.800021093Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\"" Dec 13 14:12:41.801829 containerd[1482]: time="2024-12-13T14:12:41.800209693Z" level=info msg="Ensure that sandbox 2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1 in task-service has been cleanup successfully" Dec 13 14:12:41.801652 systemd[1]: run-netns-cni\x2d126faf6d\x2df2de\x2d1fc9\x2d741b\x2d5b270b0319cf.mount: Deactivated successfully. 
Dec 13 14:12:41.803046 containerd[1482]: time="2024-12-13T14:12:41.802721254Z" level=info msg="TearDown network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" successfully" Dec 13 14:12:41.803046 containerd[1482]: time="2024-12-13T14:12:41.802755934Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" returns successfully" Dec 13 14:12:41.804151 containerd[1482]: time="2024-12-13T14:12:41.803445454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:1,}" Dec 13 14:12:41.870594 containerd[1482]: time="2024-12-13T14:12:41.870376351Z" level=error msg="Failed to destroy network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:41.871876 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081-shm.mount: Deactivated successfully. 
Dec 13 14:12:41.873337 containerd[1482]: time="2024-12-13T14:12:41.872917952Z" level=error msg="encountered an error cleaning up failed sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:41.873337 containerd[1482]: time="2024-12-13T14:12:41.873155472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:41.873756 kubelet[2030]: E1213 14:12:41.873364 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:41.873756 kubelet[2030]: E1213 14:12:41.873415 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:41.873756 kubelet[2030]: E1213 14:12:41.873436 2030 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:41.873855 kubelet[2030]: E1213 14:12:41.873489 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:42.659475 kubelet[2030]: E1213 14:12:42.659401 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:42.808311 kubelet[2030]: I1213 14:12:42.807555 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081" Dec 13 14:12:42.808461 containerd[1482]: time="2024-12-13T14:12:42.808398704Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\"" Dec 13 14:12:42.808885 containerd[1482]: time="2024-12-13T14:12:42.808582744Z" level=info msg="Ensure that sandbox 42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081 in task-service has 
been cleanup successfully" Dec 13 14:12:42.810880 systemd[1]: run-netns-cni\x2db1a46ce1\x2d6ad5\x2de94f\x2d312b\x2decbdabca62c2.mount: Deactivated successfully. Dec 13 14:12:42.811568 containerd[1482]: time="2024-12-13T14:12:42.811510505Z" level=info msg="TearDown network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" successfully" Dec 13 14:12:42.811997 containerd[1482]: time="2024-12-13T14:12:42.811566585Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" returns successfully" Dec 13 14:12:42.813599 containerd[1482]: time="2024-12-13T14:12:42.813336305Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\"" Dec 13 14:12:42.813599 containerd[1482]: time="2024-12-13T14:12:42.813497825Z" level=info msg="TearDown network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" successfully" Dec 13 14:12:42.813599 containerd[1482]: time="2024-12-13T14:12:42.813510105Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" returns successfully" Dec 13 14:12:42.814819 containerd[1482]: time="2024-12-13T14:12:42.814417625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:2,}" Dec 13 14:12:42.884653 containerd[1482]: time="2024-12-13T14:12:42.884599883Z" level=error msg="Failed to destroy network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:42.886537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460-shm.mount: Deactivated successfully. 
Dec 13 14:12:42.888923 containerd[1482]: time="2024-12-13T14:12:42.888742124Z" level=error msg="encountered an error cleaning up failed sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:42.888923 containerd[1482]: time="2024-12-13T14:12:42.888840404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:42.890824 kubelet[2030]: E1213 14:12:42.890246 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:42.890824 kubelet[2030]: E1213 14:12:42.890332 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:42.890824 kubelet[2030]: E1213 14:12:42.890365 2030 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:42.891234 kubelet[2030]: E1213 14:12:42.890504 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:43.660056 kubelet[2030]: E1213 14:12:43.659912 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:43.817212 kubelet[2030]: I1213 14:12:43.817127 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460" Dec 13 14:12:43.818049 containerd[1482]: time="2024-12-13T14:12:43.817984350Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\"" Dec 13 14:12:43.818407 containerd[1482]: time="2024-12-13T14:12:43.818174390Z" level=info msg="Ensure that sandbox 02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460 in task-service has 
been cleanup successfully" Dec 13 14:12:43.819822 containerd[1482]: time="2024-12-13T14:12:43.819719910Z" level=info msg="TearDown network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" successfully" Dec 13 14:12:43.819822 containerd[1482]: time="2024-12-13T14:12:43.819749990Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" returns successfully" Dec 13 14:12:43.820729 containerd[1482]: time="2024-12-13T14:12:43.820357390Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\"" Dec 13 14:12:43.820729 containerd[1482]: time="2024-12-13T14:12:43.820476510Z" level=info msg="TearDown network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" successfully" Dec 13 14:12:43.820729 containerd[1482]: time="2024-12-13T14:12:43.820492550Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" returns successfully" Dec 13 14:12:43.820848 systemd[1]: run-netns-cni\x2d52052e33\x2d767c\x2da14c\x2d6b71\x2d88b111cbbaf0.mount: Deactivated successfully. 
Dec 13 14:12:43.822060 containerd[1482]: time="2024-12-13T14:12:43.821347670Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\""
Dec 13 14:12:43.822060 containerd[1482]: time="2024-12-13T14:12:43.821463110Z" level=info msg="TearDown network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" successfully"
Dec 13 14:12:43.822060 containerd[1482]: time="2024-12-13T14:12:43.821477510Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" returns successfully"
Dec 13 14:12:43.822060 containerd[1482]: time="2024-12-13T14:12:43.822039311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:3,}"
Dec 13 14:12:43.895917 containerd[1482]: time="2024-12-13T14:12:43.895791008Z" level=error msg="Failed to destroy network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:43.896522 containerd[1482]: time="2024-12-13T14:12:43.896395609Z" level=error msg="encountered an error cleaning up failed sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:43.896522 containerd[1482]: time="2024-12-13T14:12:43.896470489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:43.897661 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522-shm.mount: Deactivated successfully.
Dec 13 14:12:43.898314 kubelet[2030]: E1213 14:12:43.896940 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:43.898507 kubelet[2030]: E1213 14:12:43.898426 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l"
Dec 13 14:12:43.898787 kubelet[2030]: E1213 14:12:43.898616 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l"
Dec 13 14:12:43.899147 kubelet[2030]: E1213 14:12:43.898877 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356"
Dec 13 14:12:44.660708 kubelet[2030]: E1213 14:12:44.660584 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:12:44.824008 kubelet[2030]: I1213 14:12:44.823439 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522"
Dec 13 14:12:44.824355 containerd[1482]: time="2024-12-13T14:12:44.824319830Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\""
Dec 13 14:12:44.826605 containerd[1482]: time="2024-12-13T14:12:44.824867510Z" level=info msg="Ensure that sandbox 28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522 in task-service has been cleanup successfully"
Dec 13 14:12:44.826398 systemd[1]: run-netns-cni\x2d9f1f76c5\x2d8008\x2d887a\x2d16f6\x2de98695021918.mount: Deactivated successfully.
Dec 13 14:12:44.827375 containerd[1482]: time="2024-12-13T14:12:44.827196670Z" level=info msg="TearDown network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" successfully"
Dec 13 14:12:44.827375 containerd[1482]: time="2024-12-13T14:12:44.827228390Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" returns successfully"
Dec 13 14:12:44.828061 containerd[1482]: time="2024-12-13T14:12:44.827837790Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\""
Dec 13 14:12:44.828061 containerd[1482]: time="2024-12-13T14:12:44.828017190Z" level=info msg="TearDown network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" successfully"
Dec 13 14:12:44.828253 containerd[1482]: time="2024-12-13T14:12:44.828030950Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" returns successfully"
Dec 13 14:12:44.829106 containerd[1482]: time="2024-12-13T14:12:44.828882751Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\""
Dec 13 14:12:44.829106 containerd[1482]: time="2024-12-13T14:12:44.829032111Z" level=info msg="TearDown network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" successfully"
Dec 13 14:12:44.829106 containerd[1482]: time="2024-12-13T14:12:44.829042711Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" returns successfully"
Dec 13 14:12:44.829728 containerd[1482]: time="2024-12-13T14:12:44.829568671Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\""
Dec 13 14:12:44.829877 containerd[1482]: time="2024-12-13T14:12:44.829854551Z" level=info msg="TearDown network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" successfully"
Dec 13 14:12:44.829877 containerd[1482]: time="2024-12-13T14:12:44.829874311Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" returns successfully"
Dec 13 14:12:44.830748 containerd[1482]: time="2024-12-13T14:12:44.830377431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:4,}"
Dec 13 14:12:44.914310 containerd[1482]: time="2024-12-13T14:12:44.914146771Z" level=error msg="Failed to destroy network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:44.915082 containerd[1482]: time="2024-12-13T14:12:44.915016291Z" level=error msg="encountered an error cleaning up failed sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:44.915188 containerd[1482]: time="2024-12-13T14:12:44.915140571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:44.917208 kubelet[2030]: E1213 14:12:44.917171 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:44.917332 kubelet[2030]: E1213 14:12:44.917238 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l"
Dec 13 14:12:44.917332 kubelet[2030]: E1213 14:12:44.917260 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l"
Dec 13 14:12:44.917332 kubelet[2030]: E1213 14:12:44.917314 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356"
Dec 13 14:12:44.918300 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032-shm.mount: Deactivated successfully.
Dec 13 14:12:45.528117 kubelet[2030]: I1213 14:12:45.527911 2030 topology_manager.go:215] "Topology Admit Handler" podUID="82fa26dc-88d4-4915-b9ea-015157a0711f" podNamespace="default" podName="nginx-deployment-6d5f899847-pzbl9"
Dec 13 14:12:45.537934 systemd[1]: Created slice kubepods-besteffort-pod82fa26dc_88d4_4915_b9ea_015157a0711f.slice - libcontainer container kubepods-besteffort-pod82fa26dc_88d4_4915_b9ea_015157a0711f.slice.
Dec 13 14:12:45.596715 kubelet[2030]: I1213 14:12:45.596656 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md6wz\" (UniqueName: \"kubernetes.io/projected/82fa26dc-88d4-4915-b9ea-015157a0711f-kube-api-access-md6wz\") pod \"nginx-deployment-6d5f899847-pzbl9\" (UID: \"82fa26dc-88d4-4915-b9ea-015157a0711f\") " pod="default/nginx-deployment-6d5f899847-pzbl9"
Dec 13 14:12:45.661102 kubelet[2030]: E1213 14:12:45.660975 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:12:45.833636 kubelet[2030]: I1213 14:12:45.833353 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032"
Dec 13 14:12:45.835105 containerd[1482]: time="2024-12-13T14:12:45.835068266Z" level=info msg="StopPodSandbox for \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\""
Dec 13 14:12:45.835622 containerd[1482]: time="2024-12-13T14:12:45.835249306Z" level=info msg="Ensure that sandbox f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032 in task-service has been cleanup successfully"
Dec 13 14:12:45.837262 systemd[1]: run-netns-cni\x2d16318df6\x2d1e88\x2d8d7a\x2d8e5d\x2d7a02384e8c3d.mount: Deactivated successfully.
Dec 13 14:12:45.837929 containerd[1482]: time="2024-12-13T14:12:45.837555666Z" level=info msg="TearDown network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\" successfully"
Dec 13 14:12:45.837929 containerd[1482]: time="2024-12-13T14:12:45.837583266Z" level=info msg="StopPodSandbox for \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\" returns successfully"
Dec 13 14:12:45.838864 containerd[1482]: time="2024-12-13T14:12:45.838052187Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\""
Dec 13 14:12:45.838864 containerd[1482]: time="2024-12-13T14:12:45.838150787Z" level=info msg="TearDown network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" successfully"
Dec 13 14:12:45.838864 containerd[1482]: time="2024-12-13T14:12:45.838161267Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" returns successfully"
Dec 13 14:12:45.838864 containerd[1482]: time="2024-12-13T14:12:45.838514707Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\""
Dec 13 14:12:45.838864 containerd[1482]: time="2024-12-13T14:12:45.838697827Z" level=info msg="TearDown network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" successfully"
Dec 13 14:12:45.838864 containerd[1482]: time="2024-12-13T14:12:45.838714267Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" returns successfully"
Dec 13 14:12:45.840222 containerd[1482]: time="2024-12-13T14:12:45.839623307Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\""
Dec 13 14:12:45.840222 containerd[1482]: time="2024-12-13T14:12:45.839759787Z" level=info msg="TearDown network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" successfully"
Dec 13 14:12:45.840222 containerd[1482]: time="2024-12-13T14:12:45.839772387Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" returns successfully"
Dec 13 14:12:45.840338 containerd[1482]: time="2024-12-13T14:12:45.840322307Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\""
Dec 13 14:12:45.840516 containerd[1482]: time="2024-12-13T14:12:45.840395547Z" level=info msg="TearDown network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" successfully"
Dec 13 14:12:45.840516 containerd[1482]: time="2024-12-13T14:12:45.840415027Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" returns successfully"
Dec 13 14:12:45.842197 containerd[1482]: time="2024-12-13T14:12:45.841614227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pzbl9,Uid:82fa26dc-88d4-4915-b9ea-015157a0711f,Namespace:default,Attempt:0,}"
Dec 13 14:12:45.842197 containerd[1482]: time="2024-12-13T14:12:45.841877707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:5,}"
Dec 13 14:12:45.965592 containerd[1482]: time="2024-12-13T14:12:45.965532056Z" level=error msg="Failed to destroy network for sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:45.969166 containerd[1482]: time="2024-12-13T14:12:45.968551577Z" level=error msg="encountered an error cleaning up failed sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:45.969166 containerd[1482]: time="2024-12-13T14:12:45.968920537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pzbl9,Uid:82fa26dc-88d4-4915-b9ea-015157a0711f,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:45.969358 kubelet[2030]: E1213 14:12:45.969256 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:45.969358 kubelet[2030]: E1213 14:12:45.969315 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-pzbl9"
Dec 13 14:12:45.969358 kubelet[2030]: E1213 14:12:45.969335 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-pzbl9"
Dec 13 14:12:45.969446 kubelet[2030]: E1213 14:12:45.969384 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-pzbl9_default(82fa26dc-88d4-4915-b9ea-015157a0711f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-pzbl9_default(82fa26dc-88d4-4915-b9ea-015157a0711f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-pzbl9" podUID="82fa26dc-88d4-4915-b9ea-015157a0711f"
Dec 13 14:12:45.973868 containerd[1482]: time="2024-12-13T14:12:45.973580938Z" level=error msg="Failed to destroy network for sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:45.975407 containerd[1482]: time="2024-12-13T14:12:45.975352658Z" level=error msg="encountered an error cleaning up failed sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:45.975509 containerd[1482]: time="2024-12-13T14:12:45.975434339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:45.976186 kubelet[2030]: E1213 14:12:45.976162 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:45.976277 kubelet[2030]: E1213 14:12:45.976214 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l"
Dec 13 14:12:45.976277 kubelet[2030]: E1213 14:12:45.976234 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l"
Dec 13 14:12:45.976340 kubelet[2030]: E1213 14:12:45.976280 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356"
Dec 13 14:12:46.642074 kubelet[2030]: E1213 14:12:46.642032 2030 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:12:46.662110 kubelet[2030]: E1213 14:12:46.662024 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:12:46.826698 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f-shm.mount: Deactivated successfully.
Dec 13 14:12:46.827077 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9-shm.mount: Deactivated successfully.
Dec 13 14:12:46.836404 kubelet[2030]: I1213 14:12:46.836339 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9"
Dec 13 14:12:46.838427 containerd[1482]: time="2024-12-13T14:12:46.837942576Z" level=info msg="StopPodSandbox for \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\""
Dec 13 14:12:46.838427 containerd[1482]: time="2024-12-13T14:12:46.838134016Z" level=info msg="Ensure that sandbox ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9 in task-service has been cleanup successfully"
Dec 13 14:12:46.840638 containerd[1482]: time="2024-12-13T14:12:46.838611016Z" level=info msg="TearDown network for sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\" successfully"
Dec 13 14:12:46.840638 containerd[1482]: time="2024-12-13T14:12:46.838629336Z" level=info msg="StopPodSandbox for \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\" returns successfully"
Dec 13 14:12:46.840713 systemd[1]: run-netns-cni\x2dd5de359f\x2d0af7\x2db76a\x2dac92\x2df10c64aeecc1.mount: Deactivated successfully.
Dec 13 14:12:46.843228 containerd[1482]: time="2024-12-13T14:12:46.842998617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pzbl9,Uid:82fa26dc-88d4-4915-b9ea-015157a0711f,Namespace:default,Attempt:1,}"
Dec 13 14:12:46.846716 kubelet[2030]: I1213 14:12:46.846685 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f"
Dec 13 14:12:46.848898 containerd[1482]: time="2024-12-13T14:12:46.848752778Z" level=info msg="StopPodSandbox for \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\""
Dec 13 14:12:46.849365 containerd[1482]: time="2024-12-13T14:12:46.849335378Z" level=info msg="Ensure that sandbox f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f in task-service has been cleanup successfully"
Dec 13 14:12:46.849706 containerd[1482]: time="2024-12-13T14:12:46.849529858Z" level=info msg="TearDown network for sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\" successfully"
Dec 13 14:12:46.849706 containerd[1482]: time="2024-12-13T14:12:46.849547418Z" level=info msg="StopPodSandbox for \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\" returns successfully"
Dec 13 14:12:46.851264 systemd[1]: run-netns-cni\x2d95fd11ca\x2d96c4\x2d6118\x2db769\x2d84c6ec3d6932.mount: Deactivated successfully.
Dec 13 14:12:46.852127 containerd[1482]: time="2024-12-13T14:12:46.852086779Z" level=info msg="StopPodSandbox for \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\""
Dec 13 14:12:46.852267 containerd[1482]: time="2024-12-13T14:12:46.852248819Z" level=info msg="TearDown network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\" successfully"
Dec 13 14:12:46.852300 containerd[1482]: time="2024-12-13T14:12:46.852266139Z" level=info msg="StopPodSandbox for \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\" returns successfully"
Dec 13 14:12:46.854619 containerd[1482]: time="2024-12-13T14:12:46.854462339Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\""
Dec 13 14:12:46.854619 containerd[1482]: time="2024-12-13T14:12:46.854566739Z" level=info msg="TearDown network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" successfully"
Dec 13 14:12:46.854619 containerd[1482]: time="2024-12-13T14:12:46.854578019Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" returns successfully"
Dec 13 14:12:46.855739 containerd[1482]: time="2024-12-13T14:12:46.855118459Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\""
Dec 13 14:12:46.855739 containerd[1482]: time="2024-12-13T14:12:46.855190339Z" level=info msg="TearDown network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" successfully"
Dec 13 14:12:46.855739 containerd[1482]: time="2024-12-13T14:12:46.855199899Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" returns successfully"
Dec 13 14:12:46.855739 containerd[1482]: time="2024-12-13T14:12:46.855453940Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\""
Dec 13 14:12:46.855739 containerd[1482]: time="2024-12-13T14:12:46.855529420Z" level=info msg="TearDown network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" successfully"
Dec 13 14:12:46.855739 containerd[1482]: time="2024-12-13T14:12:46.855538580Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" returns successfully"
Dec 13 14:12:46.856487 containerd[1482]: time="2024-12-13T14:12:46.856193260Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\""
Dec 13 14:12:46.856487 containerd[1482]: time="2024-12-13T14:12:46.856275100Z" level=info msg="TearDown network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" successfully"
Dec 13 14:12:46.856487 containerd[1482]: time="2024-12-13T14:12:46.856285500Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" returns successfully"
Dec 13 14:12:46.856891 containerd[1482]: time="2024-12-13T14:12:46.856849700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:6,}"
Dec 13 14:12:46.969574 containerd[1482]: time="2024-12-13T14:12:46.969350926Z" level=error msg="Failed to destroy network for sandbox \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:46.972289 containerd[1482]: time="2024-12-13T14:12:46.972245726Z" level=error msg="encountered an error cleaning up failed sandbox \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:46.972927 containerd[1482]: time="2024-12-13T14:12:46.972869606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pzbl9,Uid:82fa26dc-88d4-4915-b9ea-015157a0711f,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:46.973333 kubelet[2030]: E1213 14:12:46.973306 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:46.973418 kubelet[2030]: E1213 14:12:46.973371 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-pzbl9"
Dec 13 14:12:46.973418 kubelet[2030]: E1213 14:12:46.973393 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-pzbl9"
Dec 13 14:12:46.973485 kubelet[2030]: E1213 14:12:46.973454 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-pzbl9_default(82fa26dc-88d4-4915-b9ea-015157a0711f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-pzbl9_default(82fa26dc-88d4-4915-b9ea-015157a0711f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-pzbl9" podUID="82fa26dc-88d4-4915-b9ea-015157a0711f"
Dec 13 14:12:46.984608 containerd[1482]: time="2024-12-13T14:12:46.984322929Z" level=error msg="Failed to destroy network for sandbox \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:46.985965 containerd[1482]: time="2024-12-13T14:12:46.985714689Z" level=error msg="encountered an error cleaning up failed sandbox \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 14:12:46.986224 containerd[1482]: time="2024-12-13T14:12:46.985876649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for
sandbox \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:46.987086 kubelet[2030]: E1213 14:12:46.987056 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:46.987184 kubelet[2030]: E1213 14:12:46.987122 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:46.987184 kubelet[2030]: E1213 14:12:46.987147 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:46.987233 kubelet[2030]: E1213 14:12:46.987205 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:47.663056 kubelet[2030]: E1213 14:12:47.662994 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:47.827495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090-shm.mount: Deactivated successfully. Dec 13 14:12:47.827600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb-shm.mount: Deactivated successfully. 
Dec 13 14:12:47.851087 kubelet[2030]: I1213 14:12:47.849865 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb" Dec 13 14:12:47.851271 containerd[1482]: time="2024-12-13T14:12:47.850705843Z" level=info msg="StopPodSandbox for \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\"" Dec 13 14:12:47.851271 containerd[1482]: time="2024-12-13T14:12:47.850873323Z" level=info msg="Ensure that sandbox f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb in task-service has been cleanup successfully" Dec 13 14:12:47.851720 containerd[1482]: time="2024-12-13T14:12:47.851682483Z" level=info msg="TearDown network for sandbox \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\" successfully" Dec 13 14:12:47.851720 containerd[1482]: time="2024-12-13T14:12:47.851711643Z" level=info msg="StopPodSandbox for \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\" returns successfully" Dec 13 14:12:47.853289 systemd[1]: run-netns-cni\x2d97aab318\x2d4909\x2d8f86\x2d493f\x2d7cdfb55faa34.mount: Deactivated successfully. 
Dec 13 14:12:47.853629 containerd[1482]: time="2024-12-13T14:12:47.853586244Z" level=info msg="StopPodSandbox for \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\"" Dec 13 14:12:47.853714 containerd[1482]: time="2024-12-13T14:12:47.853699444Z" level=info msg="TearDown network for sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\" successfully" Dec 13 14:12:47.853714 containerd[1482]: time="2024-12-13T14:12:47.853709924Z" level=info msg="StopPodSandbox for \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\" returns successfully" Dec 13 14:12:47.856239 containerd[1482]: time="2024-12-13T14:12:47.856155764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pzbl9,Uid:82fa26dc-88d4-4915-b9ea-015157a0711f,Namespace:default,Attempt:2,}" Dec 13 14:12:47.861097 kubelet[2030]: I1213 14:12:47.860867 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090" Dec 13 14:12:47.863479 containerd[1482]: time="2024-12-13T14:12:47.861868885Z" level=info msg="StopPodSandbox for \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\"" Dec 13 14:12:47.863479 containerd[1482]: time="2024-12-13T14:12:47.862105565Z" level=info msg="Ensure that sandbox 5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090 in task-service has been cleanup successfully" Dec 13 14:12:47.863906 systemd[1]: run-netns-cni\x2dfa6a8596\x2d97ee\x2dcb4e\x2d42f6\x2dd974fc4f5a15.mount: Deactivated successfully. 
Dec 13 14:12:47.864592 containerd[1482]: time="2024-12-13T14:12:47.864316606Z" level=info msg="TearDown network for sandbox \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\" successfully" Dec 13 14:12:47.864592 containerd[1482]: time="2024-12-13T14:12:47.864348806Z" level=info msg="StopPodSandbox for \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\" returns successfully" Dec 13 14:12:47.867028 containerd[1482]: time="2024-12-13T14:12:47.866853847Z" level=info msg="StopPodSandbox for \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\"" Dec 13 14:12:47.867745 containerd[1482]: time="2024-12-13T14:12:47.867499247Z" level=info msg="TearDown network for sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\" successfully" Dec 13 14:12:47.867862 containerd[1482]: time="2024-12-13T14:12:47.867735207Z" level=info msg="StopPodSandbox for \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\" returns successfully" Dec 13 14:12:47.868808 containerd[1482]: time="2024-12-13T14:12:47.868480367Z" level=info msg="StopPodSandbox for \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\"" Dec 13 14:12:47.868808 containerd[1482]: time="2024-12-13T14:12:47.868633167Z" level=info msg="TearDown network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\" successfully" Dec 13 14:12:47.868808 containerd[1482]: time="2024-12-13T14:12:47.868659967Z" level=info msg="StopPodSandbox for \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\" returns successfully" Dec 13 14:12:47.869846 containerd[1482]: time="2024-12-13T14:12:47.869119167Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\"" Dec 13 14:12:47.869846 containerd[1482]: time="2024-12-13T14:12:47.869207287Z" level=info msg="TearDown network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" successfully" Dec 
13 14:12:47.869846 containerd[1482]: time="2024-12-13T14:12:47.869218567Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" returns successfully" Dec 13 14:12:47.869846 containerd[1482]: time="2024-12-13T14:12:47.869552807Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\"" Dec 13 14:12:47.869846 containerd[1482]: time="2024-12-13T14:12:47.869618047Z" level=info msg="TearDown network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" successfully" Dec 13 14:12:47.869846 containerd[1482]: time="2024-12-13T14:12:47.869627207Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" returns successfully" Dec 13 14:12:47.871684 containerd[1482]: time="2024-12-13T14:12:47.871625128Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\"" Dec 13 14:12:47.871776 containerd[1482]: time="2024-12-13T14:12:47.871734928Z" level=info msg="TearDown network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" successfully" Dec 13 14:12:47.871776 containerd[1482]: time="2024-12-13T14:12:47.871746288Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" returns successfully" Dec 13 14:12:47.872157 containerd[1482]: time="2024-12-13T14:12:47.872117128Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\"" Dec 13 14:12:47.872351 containerd[1482]: time="2024-12-13T14:12:47.872296608Z" level=info msg="TearDown network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" successfully" Dec 13 14:12:47.872351 containerd[1482]: time="2024-12-13T14:12:47.872314168Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" returns successfully" Dec 13 
14:12:47.872859 containerd[1482]: time="2024-12-13T14:12:47.872819328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:7,}" Dec 13 14:12:47.988178 containerd[1482]: time="2024-12-13T14:12:47.987348034Z" level=error msg="Failed to destroy network for sandbox \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:47.989849 containerd[1482]: time="2024-12-13T14:12:47.989659354Z" level=error msg="encountered an error cleaning up failed sandbox \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:47.989849 containerd[1482]: time="2024-12-13T14:12:47.989738074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pzbl9,Uid:82fa26dc-88d4-4915-b9ea-015157a0711f,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:47.990307 kubelet[2030]: E1213 14:12:47.990179 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:47.990307 kubelet[2030]: E1213 14:12:47.990238 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-pzbl9" Dec 13 14:12:47.990307 kubelet[2030]: E1213 14:12:47.990259 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-pzbl9" Dec 13 14:12:47.990626 kubelet[2030]: E1213 14:12:47.990318 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-pzbl9_default(82fa26dc-88d4-4915-b9ea-015157a0711f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-pzbl9_default(82fa26dc-88d4-4915-b9ea-015157a0711f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-pzbl9" podUID="82fa26dc-88d4-4915-b9ea-015157a0711f" Dec 13 14:12:48.010699 containerd[1482]: time="2024-12-13T14:12:48.010609079Z" level=error msg="Failed to destroy network for 
sandbox \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:48.011112 containerd[1482]: time="2024-12-13T14:12:48.011067999Z" level=error msg="encountered an error cleaning up failed sandbox \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:48.011181 containerd[1482]: time="2024-12-13T14:12:48.011146719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:48.011750 kubelet[2030]: E1213 14:12:48.011381 2030 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:12:48.011750 kubelet[2030]: E1213 14:12:48.011439 2030 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:48.011750 kubelet[2030]: E1213 14:12:48.011459 2030 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jd54l" Dec 13 14:12:48.011913 kubelet[2030]: E1213 14:12:48.011518 2030 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jd54l_calico-system(c87203ee-871f-4f04-a281-8859d4bb2356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jd54l" podUID="c87203ee-871f-4f04-a281-8859d4bb2356" Dec 13 14:12:48.167215 containerd[1482]: time="2024-12-13T14:12:48.167151313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:48.168111 containerd[1482]: time="2024-12-13T14:12:48.168016993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 14:12:48.168899 containerd[1482]: time="2024-12-13T14:12:48.168850393Z" level=info msg="ImageCreate 
event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:48.171370 containerd[1482]: time="2024-12-13T14:12:48.171308634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:48.172837 containerd[1482]: time="2024-12-13T14:12:48.171990354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.375651875s" Dec 13 14:12:48.172837 containerd[1482]: time="2024-12-13T14:12:48.172027394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 14:12:48.179776 containerd[1482]: time="2024-12-13T14:12:48.179592036Z" level=info msg="CreateContainer within sandbox \"4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:12:48.195993 containerd[1482]: time="2024-12-13T14:12:48.195917679Z" level=info msg="CreateContainer within sandbox \"4dc7a1c163c32b00dbe38e9f9f5e82c0f1fc82d44845506b69f3b9f5d4070917\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f62e5859b39786d8297a687d673f0dd732919b26fd5e5626f957eaa51fe55a3e\"" Dec 13 14:12:48.197091 containerd[1482]: time="2024-12-13T14:12:48.197055720Z" level=info msg="StartContainer for \"f62e5859b39786d8297a687d673f0dd732919b26fd5e5626f957eaa51fe55a3e\"" Dec 13 14:12:48.225145 systemd[1]: Started 
cri-containerd-f62e5859b39786d8297a687d673f0dd732919b26fd5e5626f957eaa51fe55a3e.scope - libcontainer container f62e5859b39786d8297a687d673f0dd732919b26fd5e5626f957eaa51fe55a3e. Dec 13 14:12:48.263353 containerd[1482]: time="2024-12-13T14:12:48.263305454Z" level=info msg="StartContainer for \"f62e5859b39786d8297a687d673f0dd732919b26fd5e5626f957eaa51fe55a3e\" returns successfully" Dec 13 14:12:48.397055 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 14:12:48.397285 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 14:12:48.664460 kubelet[2030]: E1213 14:12:48.664317 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:48.831993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94-shm.mount: Deactivated successfully. Dec 13 14:12:48.832089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f-shm.mount: Deactivated successfully. Dec 13 14:12:48.832139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327452794.mount: Deactivated successfully. 
Dec 13 14:12:48.868979 kubelet[2030]: I1213 14:12:48.865478 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f" Dec 13 14:12:48.869103 containerd[1482]: time="2024-12-13T14:12:48.866318107Z" level=info msg="StopPodSandbox for \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\"" Dec 13 14:12:48.869103 containerd[1482]: time="2024-12-13T14:12:48.866573667Z" level=info msg="Ensure that sandbox 12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f in task-service has been cleanup successfully" Dec 13 14:12:48.869597 systemd[1]: run-netns-cni\x2d496667c2\x2decbf\x2dd717\x2da717\x2d1973d3cc5272.mount: Deactivated successfully. Dec 13 14:12:48.870898 containerd[1482]: time="2024-12-13T14:12:48.870808348Z" level=info msg="TearDown network for sandbox \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\" successfully" Dec 13 14:12:48.870898 containerd[1482]: time="2024-12-13T14:12:48.870836228Z" level=info msg="StopPodSandbox for \"12c4dd55450802c26205961d70eb3158d178064ce95c5149987191740c499a6f\" returns successfully" Dec 13 14:12:48.872490 containerd[1482]: time="2024-12-13T14:12:48.872024468Z" level=info msg="StopPodSandbox for \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\"" Dec 13 14:12:48.872490 containerd[1482]: time="2024-12-13T14:12:48.872128028Z" level=info msg="TearDown network for sandbox \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\" successfully" Dec 13 14:12:48.872490 containerd[1482]: time="2024-12-13T14:12:48.872138108Z" level=info msg="StopPodSandbox for \"f2be24e5786fb32a7058ab9705f30bcfc3e4a4026cab10f58ff02551fa1fcfbb\" returns successfully" Dec 13 14:12:48.874675 containerd[1482]: time="2024-12-13T14:12:48.874023388Z" level=info msg="StopPodSandbox for \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\"" Dec 13 14:12:48.874675 containerd[1482]: 
time="2024-12-13T14:12:48.874147828Z" level=info msg="TearDown network for sandbox \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\" successfully" Dec 13 14:12:48.874675 containerd[1482]: time="2024-12-13T14:12:48.874157468Z" level=info msg="StopPodSandbox for \"ad4a6cefb50f364bc6450ed28048a68c06963dd0abf84017eb7d948faa9638a9\" returns successfully" Dec 13 14:12:48.874870 containerd[1482]: time="2024-12-13T14:12:48.874831988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pzbl9,Uid:82fa26dc-88d4-4915-b9ea-015157a0711f,Namespace:default,Attempt:3,}" Dec 13 14:12:48.887322 kubelet[2030]: I1213 14:12:48.887292 2030 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94" Dec 13 14:12:48.888908 containerd[1482]: time="2024-12-13T14:12:48.888868192Z" level=info msg="StopPodSandbox for \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\"" Dec 13 14:12:48.889083 containerd[1482]: time="2024-12-13T14:12:48.889060472Z" level=info msg="Ensure that sandbox 6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94 in task-service has been cleanup successfully" Dec 13 14:12:48.890067 containerd[1482]: time="2024-12-13T14:12:48.890023552Z" level=info msg="TearDown network for sandbox \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\" successfully" Dec 13 14:12:48.890067 containerd[1482]: time="2024-12-13T14:12:48.890053432Z" level=info msg="StopPodSandbox for \"6e8f6e27da702579050499154d39eec7f6e0a2e5abc026a24b527db740ca7d94\" returns successfully" Dec 13 14:12:48.891624 containerd[1482]: time="2024-12-13T14:12:48.891560352Z" level=info msg="StopPodSandbox for \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\"" Dec 13 14:12:48.891748 containerd[1482]: time="2024-12-13T14:12:48.891690232Z" level=info msg="TearDown network for sandbox 
\"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\" successfully" Dec 13 14:12:48.891748 containerd[1482]: time="2024-12-13T14:12:48.891702032Z" level=info msg="StopPodSandbox for \"5de6340ea65e2dc203f23ca3cb0a3398ae806e317057eee62e1ce3d1ba27a090\" returns successfully" Dec 13 14:12:48.892201 systemd[1]: run-netns-cni\x2d3d2a0bc1\x2da4d2\x2d0660\x2d7eb1\x2d9b77378d38dd.mount: Deactivated successfully. Dec 13 14:12:48.894006 containerd[1482]: time="2024-12-13T14:12:48.893354872Z" level=info msg="StopPodSandbox for \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\"" Dec 13 14:12:48.894006 containerd[1482]: time="2024-12-13T14:12:48.893462073Z" level=info msg="TearDown network for sandbox \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\" successfully" Dec 13 14:12:48.894006 containerd[1482]: time="2024-12-13T14:12:48.893474353Z" level=info msg="StopPodSandbox for \"f15d278cf25295f7bb57201d18117add02b4aa17ff4a7eac547938ff5071a09f\" returns successfully" Dec 13 14:12:48.895191 containerd[1482]: time="2024-12-13T14:12:48.895163353Z" level=info msg="StopPodSandbox for \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\"" Dec 13 14:12:48.895398 containerd[1482]: time="2024-12-13T14:12:48.895381833Z" level=info msg="TearDown network for sandbox \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\" successfully" Dec 13 14:12:48.895486 containerd[1482]: time="2024-12-13T14:12:48.895470873Z" level=info msg="StopPodSandbox for \"f2e69f4e9e841249bc19fd13ed694672404181d1d1478b168b25ef9c47b6f032\" returns successfully" Dec 13 14:12:48.898433 containerd[1482]: time="2024-12-13T14:12:48.898393074Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\"" Dec 13 14:12:48.899223 containerd[1482]: time="2024-12-13T14:12:48.898506434Z" level=info msg="TearDown network for sandbox \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" 
successfully" Dec 13 14:12:48.899223 containerd[1482]: time="2024-12-13T14:12:48.898519514Z" level=info msg="StopPodSandbox for \"28bc2f177cfa743a515377b291cc11f9e4fb0051dc5a666a4586c71545852522\" returns successfully" Dec 13 14:12:48.899223 containerd[1482]: time="2024-12-13T14:12:48.898906634Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\"" Dec 13 14:12:48.899223 containerd[1482]: time="2024-12-13T14:12:48.899109514Z" level=info msg="TearDown network for sandbox \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" successfully" Dec 13 14:12:48.899223 containerd[1482]: time="2024-12-13T14:12:48.899123754Z" level=info msg="StopPodSandbox for \"02eb85062a74d54edad46261b025b63093d56d6ec4e8da919732c6f671f5e460\" returns successfully" Dec 13 14:12:48.905977 containerd[1482]: time="2024-12-13T14:12:48.905342635Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\"" Dec 13 14:12:48.905977 containerd[1482]: time="2024-12-13T14:12:48.905484635Z" level=info msg="TearDown network for sandbox \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" successfully" Dec 13 14:12:48.905977 containerd[1482]: time="2024-12-13T14:12:48.905499435Z" level=info msg="StopPodSandbox for \"42b14a1a63b350110ba68f7c30f00138610f0f9e0ef6ad261d04165fbd46d081\" returns successfully" Dec 13 14:12:48.906131 kubelet[2030]: I1213 14:12:48.905180 2030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-nmcd6" podStartSLOduration=4.626266923 podStartE2EDuration="22.905029915s" podCreationTimestamp="2024-12-13 14:12:26 +0000 UTC" firstStartedPulling="2024-12-13 14:12:29.893486202 +0000 UTC m=+3.702297065" lastFinishedPulling="2024-12-13 14:12:48.172249194 +0000 UTC m=+21.981060057" observedRunningTime="2024-12-13 14:12:48.904732515 +0000 UTC m=+22.713543378" watchObservedRunningTime="2024-12-13 14:12:48.905029915 +0000 
UTC m=+22.713840778" Dec 13 14:12:48.906795 containerd[1482]: time="2024-12-13T14:12:48.906313075Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\"" Dec 13 14:12:48.906795 containerd[1482]: time="2024-12-13T14:12:48.906410315Z" level=info msg="TearDown network for sandbox \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" successfully" Dec 13 14:12:48.906795 containerd[1482]: time="2024-12-13T14:12:48.906421515Z" level=info msg="StopPodSandbox for \"2f67d2e5d537d28f576ebf6a64b506b61c1d79251a7e8ee9db7ce7a7109296c1\" returns successfully" Dec 13 14:12:48.907190 containerd[1482]: time="2024-12-13T14:12:48.907163996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:8,}" Dec 13 14:12:49.132913 systemd-networkd[1365]: calia8b3f5126f2: Link UP Dec 13 14:12:49.134768 systemd-networkd[1365]: calia8b3f5126f2: Gained carrier Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:48.962 [INFO][2963] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:48.991 [INFO][2963] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0 nginx-deployment-6d5f899847- default 82fa26dc-88d4-4915-b9ea-015157a0711f 1537 0 2024-12-13 14:12:45 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-6d5f899847-pzbl9 eth0 default [] [] [kns.default ksa.default.default] calia8b3f5126f2 [] []}} ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Namespace="default" Pod="nginx-deployment-6d5f899847-pzbl9" 
WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:48.992 [INFO][2963] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Namespace="default" Pod="nginx-deployment-6d5f899847-pzbl9" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.053 [INFO][3001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" HandleID="k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.073 [INFO][3001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" HandleID="k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400064e340), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-6d5f899847-pzbl9", "timestamp":"2024-12-13 14:12:49.053091347 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.073 [INFO][3001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.073 [INFO][3001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.074 [INFO][3001] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.079 [INFO][3001] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.089 [INFO][3001] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.097 [INFO][3001] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.099 [INFO][3001] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.103 [INFO][3001] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.103 [INFO][3001] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.105 [INFO][3001] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94 Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.112 [INFO][3001] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.120 [INFO][3001] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 
handle="k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.120 [INFO][3001] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" host="10.0.0.4" Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.120 [INFO][3001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:12:49.144238 containerd[1482]: 2024-12-13 14:12:49.121 [INFO][3001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" HandleID="k8s-pod-network.bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" Dec 13 14:12:49.146696 containerd[1482]: 2024-12-13 14:12:49.124 [INFO][2963] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Namespace="default" Pod="nginx-deployment-6d5f899847-pzbl9" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"82fa26dc-88d4-4915-b9ea-015157a0711f", ResourceVersion:"1537", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-6d5f899847-pzbl9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia8b3f5126f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:12:49.146696 containerd[1482]: 2024-12-13 14:12:49.124 [INFO][2963] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Namespace="default" Pod="nginx-deployment-6d5f899847-pzbl9" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" Dec 13 14:12:49.146696 containerd[1482]: 2024-12-13 14:12:49.124 [INFO][2963] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8b3f5126f2 ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Namespace="default" Pod="nginx-deployment-6d5f899847-pzbl9" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" Dec 13 14:12:49.146696 containerd[1482]: 2024-12-13 14:12:49.131 [INFO][2963] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Namespace="default" Pod="nginx-deployment-6d5f899847-pzbl9" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" Dec 13 14:12:49.146696 containerd[1482]: 2024-12-13 14:12:49.131 [INFO][2963] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Namespace="default" Pod="nginx-deployment-6d5f899847-pzbl9" 
WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"82fa26dc-88d4-4915-b9ea-015157a0711f", ResourceVersion:"1537", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94", Pod:"nginx-deployment-6d5f899847-pzbl9", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia8b3f5126f2", MAC:"c2:ba:1a:2f:a2:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:12:49.146696 containerd[1482]: 2024-12-13 14:12:49.141 [INFO][2963] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94" Namespace="default" Pod="nginx-deployment-6d5f899847-pzbl9" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--pzbl9-eth0" Dec 13 14:12:49.173111 containerd[1482]: time="2024-12-13T14:12:49.172838213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:12:49.173111 containerd[1482]: time="2024-12-13T14:12:49.172905053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:12:49.173111 containerd[1482]: time="2024-12-13T14:12:49.172921413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:12:49.173111 containerd[1482]: time="2024-12-13T14:12:49.173039293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:12:49.184470 systemd-networkd[1365]: cali6252ae63ea1: Link UP Dec 13 14:12:49.186029 systemd-networkd[1365]: cali6252ae63ea1: Gained carrier Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:48.963 [INFO][2982] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:48.991 [INFO][2982] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--jd54l-eth0 csi-node-driver- calico-system c87203ee-871f-4f04-a281-8859d4bb2356 1448 0 2024-12-13 14:12:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-jd54l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6252ae63ea1 [] []}} ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Namespace="calico-system" Pod="csi-node-driver-jd54l" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--jd54l-" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:48.992 
[INFO][2982] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Namespace="calico-system" Pod="csi-node-driver-jd54l" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.053 [INFO][3006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" HandleID="k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Workload="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.078 [INFO][3006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" HandleID="k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Workload="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317940), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-jd54l", "timestamp":"2024-12-13 14:12:49.053095547 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.078 [INFO][3006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.121 [INFO][3006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.121 [INFO][3006] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.123 [INFO][3006] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.133 [INFO][3006] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.147 [INFO][3006] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.151 [INFO][3006] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.154 [INFO][3006] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.154 [INFO][3006] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.160 [INFO][3006] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510 Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.168 [INFO][3006] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.176 [INFO][3006] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 
handle="k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.177 [INFO][3006] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" host="10.0.0.4" Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.177 [INFO][3006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:12:49.205207 containerd[1482]: 2024-12-13 14:12:49.177 [INFO][3006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" HandleID="k8s-pod-network.fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Workload="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" Dec 13 14:12:49.205875 containerd[1482]: 2024-12-13 14:12:49.180 [INFO][2982] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Namespace="calico-system" Pod="csi-node-driver-jd54l" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--jd54l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c87203ee-871f-4f04-a281-8859d4bb2356", ResourceVersion:"1448", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 12, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-jd54l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6252ae63ea1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:12:49.205875 containerd[1482]: 2024-12-13 14:12:49.180 [INFO][2982] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Namespace="calico-system" Pod="csi-node-driver-jd54l" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" Dec 13 14:12:49.205875 containerd[1482]: 2024-12-13 14:12:49.180 [INFO][2982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6252ae63ea1 ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Namespace="calico-system" Pod="csi-node-driver-jd54l" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" Dec 13 14:12:49.205875 containerd[1482]: 2024-12-13 14:12:49.185 [INFO][2982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Namespace="calico-system" Pod="csi-node-driver-jd54l" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" Dec 13 14:12:49.205875 containerd[1482]: 2024-12-13 14:12:49.187 [INFO][2982] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Namespace="calico-system" 
Pod="csi-node-driver-jd54l" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--jd54l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c87203ee-871f-4f04-a281-8859d4bb2356", ResourceVersion:"1448", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 12, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510", Pod:"csi-node-driver-jd54l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6252ae63ea1", MAC:"7e:38:3a:d7:97:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:12:49.205875 containerd[1482]: 2024-12-13 14:12:49.196 [INFO][2982] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510" Namespace="calico-system" Pod="csi-node-driver-jd54l" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--jd54l-eth0" Dec 13 14:12:49.206208 systemd[1]: Started 
cri-containerd-bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94.scope - libcontainer container bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94. Dec 13 14:12:49.240185 containerd[1482]: time="2024-12-13T14:12:49.239900908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:12:49.240185 containerd[1482]: time="2024-12-13T14:12:49.239990148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:12:49.240185 containerd[1482]: time="2024-12-13T14:12:49.240007228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:12:49.240185 containerd[1482]: time="2024-12-13T14:12:49.240084188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:12:49.249993 containerd[1482]: time="2024-12-13T14:12:49.249739990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pzbl9,Uid:82fa26dc-88d4-4915-b9ea-015157a0711f,Namespace:default,Attempt:3,} returns sandbox id \"bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94\"" Dec 13 14:12:49.252574 containerd[1482]: time="2024-12-13T14:12:49.252297110Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:12:49.264172 systemd[1]: Started cri-containerd-fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510.scope - libcontainer container fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510. 
Dec 13 14:12:49.294893 containerd[1482]: time="2024-12-13T14:12:49.294834359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jd54l,Uid:c87203ee-871f-4f04-a281-8859d4bb2356,Namespace:calico-system,Attempt:8,} returns sandbox id \"fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510\"" Dec 13 14:12:49.665478 kubelet[2030]: E1213 14:12:49.665384 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:50.068996 kernel: bpftool[3258]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 14:12:50.262450 systemd-networkd[1365]: vxlan.calico: Link UP Dec 13 14:12:50.262468 systemd-networkd[1365]: vxlan.calico: Gained carrier Dec 13 14:12:50.289297 systemd-networkd[1365]: cali6252ae63ea1: Gained IPv6LL Dec 13 14:12:50.482196 systemd-networkd[1365]: calia8b3f5126f2: Gained IPv6LL Dec 13 14:12:50.665689 kubelet[2030]: E1213 14:12:50.665637 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:51.666620 kubelet[2030]: E1213 14:12:51.665895 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:52.056298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218352044.mount: Deactivated successfully. 
Dec 13 14:12:52.081130 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Dec 13 14:12:52.667089 kubelet[2030]: E1213 14:12:52.667043 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:52.877841 containerd[1482]: time="2024-12-13T14:12:52.876475389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:52.879125 containerd[1482]: time="2024-12-13T14:12:52.879054669Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939" Dec 13 14:12:52.880500 containerd[1482]: time="2024-12-13T14:12:52.880423110Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:52.886992 containerd[1482]: time="2024-12-13T14:12:52.885808191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:52.887236 containerd[1482]: time="2024-12-13T14:12:52.887198751Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 3.634858081s" Dec 13 14:12:52.887321 containerd[1482]: time="2024-12-13T14:12:52.887305351Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 14:12:52.888252 containerd[1482]: time="2024-12-13T14:12:52.888227591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 
14:12:52.890476 containerd[1482]: time="2024-12-13T14:12:52.890442792Z" level=info msg="CreateContainer within sandbox \"bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:12:52.925802 containerd[1482]: time="2024-12-13T14:12:52.925071919Z" level=info msg="CreateContainer within sandbox \"bade0e315bb789a471e6788d8c86df1aaec1515a3c12ef5b6913a1a4ffc2bb94\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"963ab5bd78b2dcb1a45166a6504d0271163668c2508b56318280a2b081dcf5aa\"" Dec 13 14:12:52.926458 containerd[1482]: time="2024-12-13T14:12:52.926321319Z" level=info msg="StartContainer for \"963ab5bd78b2dcb1a45166a6504d0271163668c2508b56318280a2b081dcf5aa\"" Dec 13 14:12:52.964246 systemd[1]: Started cri-containerd-963ab5bd78b2dcb1a45166a6504d0271163668c2508b56318280a2b081dcf5aa.scope - libcontainer container 963ab5bd78b2dcb1a45166a6504d0271163668c2508b56318280a2b081dcf5aa. Dec 13 14:12:52.998438 containerd[1482]: time="2024-12-13T14:12:52.998272854Z" level=info msg="StartContainer for \"963ab5bd78b2dcb1a45166a6504d0271163668c2508b56318280a2b081dcf5aa\" returns successfully" Dec 13 14:12:53.667847 kubelet[2030]: E1213 14:12:53.667723 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:53.934426 kubelet[2030]: I1213 14:12:53.934215 2030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-pzbl9" podStartSLOduration=5.29802416 podStartE2EDuration="8.934117961s" podCreationTimestamp="2024-12-13 14:12:45 +0000 UTC" firstStartedPulling="2024-12-13 14:12:49.25175675 +0000 UTC m=+23.060567613" lastFinishedPulling="2024-12-13 14:12:52.887850551 +0000 UTC m=+26.696661414" observedRunningTime="2024-12-13 14:12:53.934079881 +0000 UTC m=+27.742890784" watchObservedRunningTime="2024-12-13 14:12:53.934117961 +0000 UTC m=+27.742928864" Dec 13 
14:12:54.667962 kubelet[2030]: E1213 14:12:54.667920 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:54.697064 containerd[1482]: time="2024-12-13T14:12:54.697002631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:54.698561 containerd[1482]: time="2024-12-13T14:12:54.698296151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 14:12:54.700075 containerd[1482]: time="2024-12-13T14:12:54.699810191Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:54.702654 containerd[1482]: time="2024-12-13T14:12:54.702613752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:54.703412 containerd[1482]: time="2024-12-13T14:12:54.703374392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.814990841s" Dec 13 14:12:54.703469 containerd[1482]: time="2024-12-13T14:12:54.703415632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 14:12:54.706353 containerd[1482]: time="2024-12-13T14:12:54.706317113Z" level=info msg="CreateContainer within sandbox \"fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510\" for 
container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:12:54.728167 containerd[1482]: time="2024-12-13T14:12:54.727994277Z" level=info msg="CreateContainer within sandbox \"fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1d4e67c5c2125b7cf89f6a940438d7eb434595ba0e9a954cc5359be4be65ddd1\"" Dec 13 14:12:54.729245 containerd[1482]: time="2024-12-13T14:12:54.729145197Z" level=info msg="StartContainer for \"1d4e67c5c2125b7cf89f6a940438d7eb434595ba0e9a954cc5359be4be65ddd1\"" Dec 13 14:12:54.767183 systemd[1]: Started cri-containerd-1d4e67c5c2125b7cf89f6a940438d7eb434595ba0e9a954cc5359be4be65ddd1.scope - libcontainer container 1d4e67c5c2125b7cf89f6a940438d7eb434595ba0e9a954cc5359be4be65ddd1. Dec 13 14:12:54.801557 containerd[1482]: time="2024-12-13T14:12:54.801436731Z" level=info msg="StartContainer for \"1d4e67c5c2125b7cf89f6a940438d7eb434595ba0e9a954cc5359be4be65ddd1\" returns successfully" Dec 13 14:12:54.804861 containerd[1482]: time="2024-12-13T14:12:54.804514972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:12:55.669094 kubelet[2030]: E1213 14:12:55.669017 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:56.503291 containerd[1482]: time="2024-12-13T14:12:56.503224539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:56.504988 containerd[1482]: time="2024-12-13T14:12:56.504914939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 14:12:56.506200 containerd[1482]: time="2024-12-13T14:12:56.506120339Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:56.511017 containerd[1482]: time="2024-12-13T14:12:56.509827020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:12:56.512796 containerd[1482]: time="2024-12-13T14:12:56.512740781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.708170689s" Dec 13 14:12:56.512978 containerd[1482]: time="2024-12-13T14:12:56.512928501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 14:12:56.516780 containerd[1482]: time="2024-12-13T14:12:56.516740022Z" level=info msg="CreateContainer within sandbox \"fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:12:56.533790 containerd[1482]: time="2024-12-13T14:12:56.533712345Z" level=info msg="CreateContainer within sandbox \"fd013d4b8492261edde547cdd49f97dccc0b0ce3c178ad312626fc1e4cee2510\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bfa2d3498082dee871b36e8b332cdf78f6db70d362576a2419ce8ffa79390274\"" Dec 13 14:12:56.534312 containerd[1482]: time="2024-12-13T14:12:56.534276865Z" level=info msg="StartContainer for \"bfa2d3498082dee871b36e8b332cdf78f6db70d362576a2419ce8ffa79390274\"" Dec 13 14:12:56.566158 systemd[1]: Started 
cri-containerd-bfa2d3498082dee871b36e8b332cdf78f6db70d362576a2419ce8ffa79390274.scope - libcontainer container bfa2d3498082dee871b36e8b332cdf78f6db70d362576a2419ce8ffa79390274. Dec 13 14:12:56.599372 containerd[1482]: time="2024-12-13T14:12:56.599123557Z" level=info msg="StartContainer for \"bfa2d3498082dee871b36e8b332cdf78f6db70d362576a2419ce8ffa79390274\" returns successfully" Dec 13 14:12:56.669917 kubelet[2030]: E1213 14:12:56.669840 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:56.753024 kubelet[2030]: I1213 14:12:56.752985 2030 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:12:56.753024 kubelet[2030]: I1213 14:12:56.753031 2030 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:12:56.960744 kubelet[2030]: I1213 14:12:56.960542 2030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-jd54l" podStartSLOduration=23.743714685 podStartE2EDuration="30.960466866s" podCreationTimestamp="2024-12-13 14:12:26 +0000 UTC" firstStartedPulling="2024-12-13 14:12:49.29699632 +0000 UTC m=+23.105807183" lastFinishedPulling="2024-12-13 14:12:56.513748541 +0000 UTC m=+30.322559364" observedRunningTime="2024-12-13 14:12:56.958880625 +0000 UTC m=+30.767691528" watchObservedRunningTime="2024-12-13 14:12:56.960466866 +0000 UTC m=+30.769277809" Dec 13 14:12:57.670566 kubelet[2030]: E1213 14:12:57.670472 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:12:58.671669 kubelet[2030]: E1213 14:12:58.671599 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
14:12:59.672202 kubelet[2030]: E1213 14:12:59.672133 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:00.388123 kubelet[2030]: I1213 14:13:00.387712 2030 topology_manager.go:215] "Topology Admit Handler" podUID="e72760d0-1ac0-4a4c-b1c1-10d511fa5784" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:13:00.397812 systemd[1]: Created slice kubepods-besteffort-pode72760d0_1ac0_4a4c_b1c1_10d511fa5784.slice - libcontainer container kubepods-besteffort-pode72760d0_1ac0_4a4c_b1c1_10d511fa5784.slice. Dec 13 14:13:00.485499 kubelet[2030]: I1213 14:13:00.485393 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2fjs\" (UniqueName: \"kubernetes.io/projected/e72760d0-1ac0-4a4c-b1c1-10d511fa5784-kube-api-access-m2fjs\") pod \"nfs-server-provisioner-0\" (UID: \"e72760d0-1ac0-4a4c-b1c1-10d511fa5784\") " pod="default/nfs-server-provisioner-0" Dec 13 14:13:00.485720 kubelet[2030]: I1213 14:13:00.485570 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e72760d0-1ac0-4a4c-b1c1-10d511fa5784-data\") pod \"nfs-server-provisioner-0\" (UID: \"e72760d0-1ac0-4a4c-b1c1-10d511fa5784\") " pod="default/nfs-server-provisioner-0" Dec 13 14:13:00.672815 kubelet[2030]: E1213 14:13:00.672665 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:00.704535 containerd[1482]: time="2024-12-13T14:13:00.704432109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e72760d0-1ac0-4a4c-b1c1-10d511fa5784,Namespace:default,Attempt:0,}" Dec 13 14:13:00.873720 systemd-networkd[1365]: cali60e51b789ff: Link UP Dec 13 14:13:00.875400 systemd-networkd[1365]: cali60e51b789ff: Gained carrier Dec 13 14:13:00.889746 containerd[1482]: 
2024-12-13 14:13:00.775 [INFO][3514] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default e72760d0-1ac0-4a4c-b1c1-10d511fa5784 1634 0 2024-12-13 14:13:00 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.775 [INFO][3514] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.810 [INFO][3524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" HandleID="k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" 
Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.830 [INFO][3524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" HandleID="k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d6d0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 14:13:00.810314327 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.830 [INFO][3524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.830 [INFO][3524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.830 [INFO][3524] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.833 [INFO][3524] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.839 [INFO][3524] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.846 [INFO][3524] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.849 [INFO][3524] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.852 [INFO][3524] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.852 [INFO][3524] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.855 [INFO][3524] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077 Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.861 [INFO][3524] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.867 [INFO][3524] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 
handle="k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.868 [INFO][3524] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" host="10.0.0.4" Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.868 [INFO][3524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:13:00.889746 containerd[1482]: 2024-12-13 14:13:00.868 [INFO][3524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" HandleID="k8s-pod-network.0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 14:13:00.890645 containerd[1482]: 2024-12-13 14:13:00.871 [INFO][3514] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e72760d0-1ac0-4a4c-b1c1-10d511fa5784", ResourceVersion:"1634", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:13:00.890645 containerd[1482]: 2024-12-13 14:13:00.871 [INFO][3514] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 14:13:00.890645 containerd[1482]: 2024-12-13 14:13:00.871 [INFO][3514] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 14:13:00.890645 containerd[1482]: 2024-12-13 14:13:00.874 [INFO][3514] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 14:13:00.891055 containerd[1482]: 2024-12-13 14:13:00.876 [INFO][3514] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e72760d0-1ac0-4a4c-b1c1-10d511fa5784", ResourceVersion:"1634", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 13, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"52:62:2f:48:54:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:13:00.891055 containerd[1482]: 2024-12-13 14:13:00.887 [INFO][3514] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Dec 13 14:13:00.915750 containerd[1482]: time="2024-12-13T14:13:00.915608426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:13:00.915750 containerd[1482]: time="2024-12-13T14:13:00.915688626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:13:00.915750 containerd[1482]: time="2024-12-13T14:13:00.915719866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:13:00.917021 containerd[1482]: time="2024-12-13T14:13:00.915817306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:13:00.941340 systemd[1]: Started cri-containerd-0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077.scope - libcontainer container 0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077. Dec 13 14:13:00.982535 containerd[1482]: time="2024-12-13T14:13:00.982399038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e72760d0-1ac0-4a4c-b1c1-10d511fa5784,Namespace:default,Attempt:0,} returns sandbox id \"0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077\"" Dec 13 14:13:00.984497 containerd[1482]: time="2024-12-13T14:13:00.984271118Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:13:01.673809 kubelet[2030]: E1213 14:13:01.673748 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:02.674883 kubelet[2030]: E1213 14:13:02.674824 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:02.896338 systemd-networkd[1365]: cali60e51b789ff: Gained IPv6LL Dec 13 14:13:03.675235 kubelet[2030]: E1213 14:13:03.675174 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 14:13:04.418962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725061335.mount: Deactivated successfully. Dec 13 14:13:04.676199 kubelet[2030]: E1213 14:13:04.676042 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:05.677150 kubelet[2030]: E1213 14:13:05.677106 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:06.132443 containerd[1482]: time="2024-12-13T14:13:06.132377189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:13:06.134679 containerd[1482]: time="2024-12-13T14:13:06.134613469Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373691" Dec 13 14:13:06.136244 containerd[1482]: time="2024-12-13T14:13:06.136176189Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:13:06.140620 containerd[1482]: time="2024-12-13T14:13:06.140018910Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 14:13:06.141296 containerd[1482]: time="2024-12-13T14:13:06.141254310Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.156943512s" 
Dec 13 14:13:06.141296 containerd[1482]: time="2024-12-13T14:13:06.141292190Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Dec 13 14:13:06.144339 containerd[1482]: time="2024-12-13T14:13:06.144299271Z" level=info msg="CreateContainer within sandbox \"0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:13:06.163939 containerd[1482]: time="2024-12-13T14:13:06.163849274Z" level=info msg="CreateContainer within sandbox \"0f76f3a3e92c2c52397e3afe04df57045cab5a2c33fc8636e4ea73d599126077\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2a19020d974a75ba7534caf6e88fd21a337b64af842a4a4350ea899f29cb90d2\"" Dec 13 14:13:06.167164 containerd[1482]: time="2024-12-13T14:13:06.165797554Z" level=info msg="StartContainer for \"2a19020d974a75ba7534caf6e88fd21a337b64af842a4a4350ea899f29cb90d2\"" Dec 13 14:13:06.204198 systemd[1]: Started cri-containerd-2a19020d974a75ba7534caf6e88fd21a337b64af842a4a4350ea899f29cb90d2.scope - libcontainer container 2a19020d974a75ba7534caf6e88fd21a337b64af842a4a4350ea899f29cb90d2. 
Dec 13 14:13:06.235691 containerd[1482]: time="2024-12-13T14:13:06.235644165Z" level=info msg="StartContainer for \"2a19020d974a75ba7534caf6e88fd21a337b64af842a4a4350ea899f29cb90d2\" returns successfully" Dec 13 14:13:06.642609 kubelet[2030]: E1213 14:13:06.642533 2030 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:06.678188 kubelet[2030]: E1213 14:13:06.678111 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:06.987732 kubelet[2030]: I1213 14:13:06.987365 2030 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.829639775 podStartE2EDuration="6.987298807s" podCreationTimestamp="2024-12-13 14:13:00 +0000 UTC" firstStartedPulling="2024-12-13 14:13:00.983940638 +0000 UTC m=+34.792751501" lastFinishedPulling="2024-12-13 14:13:06.14159963 +0000 UTC m=+39.950410533" observedRunningTime="2024-12-13 14:13:06.986559447 +0000 UTC m=+40.795370350" watchObservedRunningTime="2024-12-13 14:13:06.987298807 +0000 UTC m=+40.796109670" Dec 13 14:13:07.678391 kubelet[2030]: E1213 14:13:07.678328 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:08.679641 kubelet[2030]: E1213 14:13:08.679496 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:09.680574 kubelet[2030]: E1213 14:13:09.680443 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:10.681284 kubelet[2030]: E1213 14:13:10.681209 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:11.682111 kubelet[2030]: E1213 14:13:11.682027 2030 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:12.682481 kubelet[2030]: E1213 14:13:12.682409 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:13.683495 kubelet[2030]: E1213 14:13:13.683257 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:14.684408 kubelet[2030]: E1213 14:13:14.684292 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:15.560561 kubelet[2030]: I1213 14:13:15.559879 2030 topology_manager.go:215] "Topology Admit Handler" podUID="79c26da3-c37b-4435-b867-054995220941" podNamespace="default" podName="test-pod-1" Dec 13 14:13:15.569140 systemd[1]: Created slice kubepods-besteffort-pod79c26da3_c37b_4435_b867_054995220941.slice - libcontainer container kubepods-besteffort-pod79c26da3_c37b_4435_b867_054995220941.slice. 
Dec 13 14:13:15.680650 kubelet[2030]: I1213 14:13:15.680205 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0e0ee083-63c7-4469-9cbb-03d84d7d9ee8\" (UniqueName: \"kubernetes.io/nfs/79c26da3-c37b-4435-b867-054995220941-pvc-0e0ee083-63c7-4469-9cbb-03d84d7d9ee8\") pod \"test-pod-1\" (UID: \"79c26da3-c37b-4435-b867-054995220941\") " pod="default/test-pod-1" Dec 13 14:13:15.680650 kubelet[2030]: I1213 14:13:15.680276 2030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2nkl\" (UniqueName: \"kubernetes.io/projected/79c26da3-c37b-4435-b867-054995220941-kube-api-access-x2nkl\") pod \"test-pod-1\" (UID: \"79c26da3-c37b-4435-b867-054995220941\") " pod="default/test-pod-1" Dec 13 14:13:15.684572 kubelet[2030]: E1213 14:13:15.684504 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:13:15.815029 kernel: FS-Cache: Loaded Dec 13 14:13:15.841380 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:13:15.841523 kernel: RPC: Registered udp transport module. Dec 13 14:13:15.841859 kernel: RPC: Registered tcp transport module. Dec 13 14:13:15.841893 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 14:13:15.841916 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 14:13:16.007193 kernel: NFS: Registering the id_resolver key type
Dec 13 14:13:16.007344 kernel: Key type id_resolver registered
Dec 13 14:13:16.007414 kernel: Key type id_legacy registered
Dec 13 14:13:16.038293 nfsidmap[3733]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:13:16.042499 nfsidmap[3734]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Dec 13 14:13:16.176864 containerd[1482]: time="2024-12-13T14:13:16.176139112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:79c26da3-c37b-4435-b867-054995220941,Namespace:default,Attempt:0,}"
Dec 13 14:13:16.376393 systemd-networkd[1365]: cali5ec59c6bf6e: Link UP
Dec 13 14:13:16.377088 systemd-networkd[1365]: cali5ec59c6bf6e: Gained carrier
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.261 [INFO][3736] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default 79c26da3-c37b-4435-b867-054995220941 1702 0 2024-12-13 14:13:02 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.261 [INFO][3736] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.297 [INFO][3746] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" HandleID="k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Workload="10.0.0.4-k8s-test--pod--1-eth0"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.318 [INFO][3746] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" HandleID="k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000384970), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2024-12-13 14:13:16.297371009 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.318 [INFO][3746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.318 [INFO][3746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.319 [INFO][3746] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4'
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.322 [INFO][3746] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.328 [INFO][3746] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.335 [INFO][3746] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.338 [INFO][3746] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.343 [INFO][3746] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.343 [INFO][3746] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.347 [INFO][3746] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.352 [INFO][3746] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.368 [INFO][3746] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.368 [INFO][3746] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" host="10.0.0.4"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.368 [INFO][3746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.368 [INFO][3746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" HandleID="k8s-pod-network.02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Workload="10.0.0.4-k8s-test--pod--1-eth0"
Dec 13 14:13:16.393458 containerd[1482]: 2024-12-13 14:13:16.371 [INFO][3736] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"79c26da3-c37b-4435-b867-054995220941", ResourceVersion:"1702", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:13:16.394217 containerd[1482]: 2024-12-13 14:13:16.371 [INFO][3736] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0"
Dec 13 14:13:16.394217 containerd[1482]: 2024-12-13 14:13:16.371 [INFO][3736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0"
Dec 13 14:13:16.394217 containerd[1482]: 2024-12-13 14:13:16.377 [INFO][3736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0"
Dec 13 14:13:16.394217 containerd[1482]: 2024-12-13 14:13:16.377 [INFO][3736] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"79c26da3-c37b-4435-b867-054995220941", ResourceVersion:"1702", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"42:78:d3:71:63:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Dec 13 14:13:16.394217 containerd[1482]: 2024-12-13 14:13:16.391 [INFO][3736] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0"
Dec 13 14:13:16.421965 containerd[1482]: time="2024-12-13T14:13:16.421771307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:13:16.421965 containerd[1482]: time="2024-12-13T14:13:16.421873467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:13:16.421965 containerd[1482]: time="2024-12-13T14:13:16.421898067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:13:16.422477 containerd[1482]: time="2024-12-13T14:13:16.422309347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:13:16.446259 systemd[1]: Started cri-containerd-02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7.scope - libcontainer container 02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7.
Dec 13 14:13:16.483509 containerd[1482]: time="2024-12-13T14:13:16.483461475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:79c26da3-c37b-4435-b867-054995220941,Namespace:default,Attempt:0,} returns sandbox id \"02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7\""
Dec 13 14:13:16.485878 containerd[1482]: time="2024-12-13T14:13:16.485830276Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:13:16.685601 kubelet[2030]: E1213 14:13:16.685528 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:13:16.868549 containerd[1482]: time="2024-12-13T14:13:16.868495810Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 14:13:16.869395 containerd[1482]: time="2024-12-13T14:13:16.869321130Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Dec 13 14:13:16.872935 containerd[1482]: time="2024-12-13T14:13:16.872882450Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 386.999534ms"
Dec 13 14:13:16.872935 containerd[1482]: time="2024-12-13T14:13:16.872927490Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\""
Dec 13 14:13:16.875124 containerd[1482]: time="2024-12-13T14:13:16.874990931Z" level=info msg="CreateContainer within sandbox \"02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:13:16.900229 containerd[1482]: time="2024-12-13T14:13:16.900178094Z" level=info msg="CreateContainer within sandbox \"02f66a26d2534df70d6f731e5350a71b2b48868b3dd8e8bc8be28904b60402c7\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"63f0f1abb748ccec76a6738d4f074c96c38fae593e6ad9d26b7ff71143c8e0e8\""
Dec 13 14:13:16.902485 containerd[1482]: time="2024-12-13T14:13:16.901240014Z" level=info msg="StartContainer for \"63f0f1abb748ccec76a6738d4f074c96c38fae593e6ad9d26b7ff71143c8e0e8\""
Dec 13 14:13:16.936318 systemd[1]: Started cri-containerd-63f0f1abb748ccec76a6738d4f074c96c38fae593e6ad9d26b7ff71143c8e0e8.scope - libcontainer container 63f0f1abb748ccec76a6738d4f074c96c38fae593e6ad9d26b7ff71143c8e0e8.
Dec 13 14:13:16.965683 containerd[1482]: time="2024-12-13T14:13:16.965542703Z" level=info msg="StartContainer for \"63f0f1abb748ccec76a6738d4f074c96c38fae593e6ad9d26b7ff71143c8e0e8\" returns successfully"
Dec 13 14:13:17.686593 kubelet[2030]: E1213 14:13:17.686533 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:13:17.872454 systemd-networkd[1365]: cali5ec59c6bf6e: Gained IPv6LL
Dec 13 14:13:18.688417 kubelet[2030]: E1213 14:13:18.686663 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:13:19.687220 kubelet[2030]: E1213 14:13:19.687151 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:13:20.688228 kubelet[2030]: E1213 14:13:20.688055 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:13:21.688373 kubelet[2030]: E1213 14:13:21.688280 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:13:22.689591 kubelet[2030]: E1213 14:13:22.689498 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:13:23.690908 kubelet[2030]: E1213 14:13:23.690686 2030 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"