Jan 13 20:22:00.894327 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 13 20:22:00.894376 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025 Jan 13 20:22:00.894387 kernel: KASLR enabled Jan 13 20:22:00.894393 kernel: efi: EFI v2.7 by EDK II Jan 13 20:22:00.894399 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4f698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132303d98 Jan 13 20:22:00.894404 kernel: random: crng init done Jan 13 20:22:00.894411 kernel: secureboot: Secure boot disabled Jan 13 20:22:00.894417 kernel: ACPI: Early table checksum verification disabled Jan 13 20:22:00.894423 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS ) Jan 13 20:22:00.894429 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 13 20:22:00.894437 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894443 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894449 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894455 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894462 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894470 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894476 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894483 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894489 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 13 20:22:00.894495 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 13 20:22:00.894501 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 13 20:22:00.894507 kernel: NUMA: Failed to initialise from firmware Jan 13 20:22:00.894536 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 13 20:22:00.894542 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff] Jan 13 20:22:00.894548 kernel: Zone ranges: Jan 13 20:22:00.894554 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 13 20:22:00.894562 kernel: DMA32 empty Jan 13 20:22:00.894568 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 13 20:22:00.894574 kernel: Movable zone start for each node Jan 13 20:22:00.894581 kernel: Early memory node ranges Jan 13 20:22:00.894587 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff] Jan 13 20:22:00.894593 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff] Jan 13 20:22:00.894599 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff] Jan 13 20:22:00.894605 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff] Jan 13 20:22:00.894611 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff] Jan 13 20:22:00.894618 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 13 20:22:00.894624 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 13 20:22:00.894631 kernel: psci: probing for conduit method from ACPI. Jan 13 20:22:00.894638 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 13 20:22:00.894644 kernel: psci: Using standard PSCI v0.2 function IDs Jan 13 20:22:00.894653 kernel: psci: Trusted OS migration not required Jan 13 20:22:00.894660 kernel: psci: SMC Calling Convention v1.1 Jan 13 20:22:00.894666 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 13 20:22:00.894675 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 13 20:22:00.894682 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 13 20:22:00.894688 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 13 20:22:00.894695 kernel: Detected PIPT I-cache on CPU0 Jan 13 20:22:00.894702 kernel: CPU features: detected: GIC system register CPU interface Jan 13 20:22:00.894708 kernel: CPU features: detected: Hardware dirty bit management Jan 13 20:22:00.894715 kernel: CPU features: detected: Spectre-v4 Jan 13 20:22:00.894722 kernel: CPU features: detected: Spectre-BHB Jan 13 20:22:00.894774 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 13 20:22:00.894783 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 13 20:22:00.894790 kernel: CPU features: detected: ARM erratum 1418040 Jan 13 20:22:00.894799 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 13 20:22:00.894805 kernel: alternatives: applying boot alternatives Jan 13 20:22:00.894814 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:22:00.894821 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 13 20:22:00.894827 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 20:22:00.894834 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 20:22:00.894841 kernel: Fallback order for Node 0: 0 Jan 13 20:22:00.894848 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jan 13 20:22:00.894854 kernel: Policy zone: Normal Jan 13 20:22:00.894861 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 20:22:00.894868 kernel: software IO TLB: area num 2. Jan 13 20:22:00.894876 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jan 13 20:22:00.894883 kernel: Memory: 3881336K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 214664K reserved, 0K cma-reserved) Jan 13 20:22:00.894890 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 20:22:00.894897 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 20:22:00.894904 kernel: rcu: RCU event tracing is enabled. Jan 13 20:22:00.894911 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 20:22:00.894918 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 20:22:00.894924 kernel: Tracing variant of Tasks RCU enabled. Jan 13 20:22:00.894931 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 13 20:22:00.894938 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 20:22:00.894944 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 13 20:22:00.894953 kernel: GICv3: 256 SPIs implemented Jan 13 20:22:00.894959 kernel: GICv3: 0 Extended SPIs implemented Jan 13 20:22:00.894966 kernel: Root IRQ handler: gic_handle_irq Jan 13 20:22:00.894972 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 13 20:22:00.894979 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 13 20:22:00.894986 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 13 20:22:00.894992 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jan 13 20:22:00.894999 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jan 13 20:22:00.895006 kernel: GICv3: using LPI property table @0x00000001000e0000 Jan 13 20:22:00.895012 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jan 13 20:22:00.895019 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 13 20:22:00.895028 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:22:00.895035 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 13 20:22:00.895042 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 13 20:22:00.895049 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 13 20:22:00.895055 kernel: Console: colour dummy device 80x25 Jan 13 20:22:00.895062 kernel: ACPI: Core revision 20230628 Jan 13 20:22:00.895069 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 13 20:22:00.895076 kernel: pid_max: default: 32768 minimum: 301 Jan 13 20:22:00.895083 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 20:22:00.895090 kernel: landlock: Up and running. Jan 13 20:22:00.895099 kernel: SELinux: Initializing. Jan 13 20:22:00.895105 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:22:00.895112 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 13 20:22:00.895119 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:22:00.895127 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 20:22:00.895133 kernel: rcu: Hierarchical SRCU implementation. Jan 13 20:22:00.895141 kernel: rcu: Max phase no-delay instances is 400. Jan 13 20:22:00.895148 kernel: Platform MSI: ITS@0x8080000 domain created Jan 13 20:22:00.895154 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 13 20:22:00.895163 kernel: Remapping and enabling EFI services. Jan 13 20:22:00.895169 kernel: smp: Bringing up secondary CPUs ... Jan 13 20:22:00.895176 kernel: Detected PIPT I-cache on CPU1 Jan 13 20:22:00.895183 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 13 20:22:00.895190 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jan 13 20:22:00.895197 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 13 20:22:00.895204 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 13 20:22:00.895211 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 20:22:00.895218 kernel: SMP: Total of 2 processors activated. 
Jan 13 20:22:00.895224 kernel: CPU features: detected: 32-bit EL0 Support Jan 13 20:22:00.895233 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 13 20:22:00.895240 kernel: CPU features: detected: Common not Private translations Jan 13 20:22:00.895252 kernel: CPU features: detected: CRC32 instructions Jan 13 20:22:00.896320 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 13 20:22:00.896336 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 13 20:22:00.896344 kernel: CPU features: detected: LSE atomic instructions Jan 13 20:22:00.896352 kernel: CPU features: detected: Privileged Access Never Jan 13 20:22:00.896359 kernel: CPU features: detected: RAS Extension Support Jan 13 20:22:00.896367 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 13 20:22:00.896381 kernel: CPU: All CPU(s) started at EL1 Jan 13 20:22:00.896388 kernel: alternatives: applying system-wide alternatives Jan 13 20:22:00.896395 kernel: devtmpfs: initialized Jan 13 20:22:00.896403 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 20:22:00.896410 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 20:22:00.896417 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 20:22:00.896425 kernel: SMBIOS 3.0.0 present. Jan 13 20:22:00.896434 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 13 20:22:00.896441 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 20:22:00.896448 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 13 20:22:00.896455 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 13 20:22:00.896463 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 13 20:22:00.896470 kernel: audit: initializing netlink subsys (disabled) Jan 13 20:22:00.896478 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1 Jan 13 20:22:00.896485 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 20:22:00.896492 kernel: cpuidle: using governor menu Jan 13 20:22:00.896501 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 13 20:22:00.896508 kernel: ASID allocator initialised with 32768 entries Jan 13 20:22:00.896515 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 20:22:00.896522 kernel: Serial: AMBA PL011 UART driver Jan 13 20:22:00.896530 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 13 20:22:00.896537 kernel: Modules: 0 pages in range for non-PLT usage Jan 13 20:22:00.896544 kernel: Modules: 508960 pages in range for PLT usage Jan 13 20:22:00.896552 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 20:22:00.896559 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 20:22:00.896568 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 13 20:22:00.896576 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 13 20:22:00.896583 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 20:22:00.896591 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 20:22:00.896598 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 13 20:22:00.896605 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 13 20:22:00.896612 kernel: ACPI: Added _OSI(Module Device) Jan 13 20:22:00.896619 kernel: ACPI: Added _OSI(Processor Device) Jan 13 20:22:00.896627 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 20:22:00.896635 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 20:22:00.896642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 13 20:22:00.896650 kernel: ACPI: Interpreter enabled Jan 13 20:22:00.896657 kernel: ACPI: Using GIC for interrupt routing Jan 13 20:22:00.896664 kernel: ACPI: MCFG table detected, 1 entries Jan 13 20:22:00.896672 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 13 20:22:00.896679 kernel: printk: console [ttyAMA0] enabled Jan 13 20:22:00.896686 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 20:22:00.896901 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 13 20:22:00.896981 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 13 20:22:00.897049 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 13 20:22:00.897114 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 13 20:22:00.897176 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 13 20:22:00.897186 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 13 20:22:00.897193 kernel: PCI host bridge to bus 0000:00 Jan 13 20:22:00.897283 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 13 20:22:00.897654 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 20:22:00.897745 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 13 20:22:00.897822 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 20:22:00.897909 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 13 20:22:00.897993 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jan 13 20:22:00.898060 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jan 13 20:22:00.898133 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jan 13 20:22:00.898206 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.899359 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jan 13 20:22:00.899471 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.899542 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jan 13 20:22:00.899615 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.899688 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jan 13 20:22:00.899816 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.899889 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jan 13 20:22:00.899964 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.900032 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jan 13 20:22:00.900121 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.900211 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jan 13 20:22:00.900317 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.900385 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jan 13 20:22:00.901512 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.901600 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jan 13 20:22:00.901677 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 13 20:22:00.901769 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jan 13 20:22:00.901856 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jan 13 20:22:00.901928 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207] Jan 13 20:22:00.902007 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 13 20:22:00.905549 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jan 13 20:22:00.905665 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 20:22:00.905768 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 13 20:22:00.905865 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 13 20:22:00.905936 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jan 13 20:22:00.906013 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 13 20:22:00.906082 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jan 13 20:22:00.906150 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jan 13 20:22:00.906231 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 13 20:22:00.906328 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jan 13 20:22:00.906412 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 13 20:22:00.906480 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jan 13 20:22:00.906555 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 13 20:22:00.906686 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jan 13 20:22:00.906857 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jan 13 20:22:00.906988 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 13 20:22:00.907115 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jan 13 20:22:00.907245 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jan 13 20:22:00.908608 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 13 20:22:00.908752 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 13 20:22:00.908828 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 13 20:22:00.908894 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 13 20:22:00.909025 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 13 20:22:00.909145 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 13 20:22:00.909244 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 13 20:22:00.909410 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 13 20:22:00.909480 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 13 20:22:00.909542 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 13 20:22:00.909608 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 13 20:22:00.909671 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 13 20:22:00.909783 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 13 20:22:00.909862 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 13 20:22:00.909927 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 13 20:22:00.909990 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Jan 13 20:22:00.910057 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 13 20:22:00.910122 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 13 20:22:00.910184 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 13 20:22:00.910270 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 13 20:22:00.910339 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 13 20:22:00.910403 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 13 20:22:00.910478 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 13 20:22:00.910544 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 13 20:22:00.910609 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 13 20:22:00.910678 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 13 20:22:00.910754 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 13 20:22:00.910826 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 13 20:22:00.910891 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Jan 13 20:22:00.910957 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 
0x8000000000-0x80001fffff 64bit pref] Jan 13 20:22:00.911022 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jan 13 20:22:00.911087 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:22:00.911152 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jan 13 20:22:00.911216 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:22:00.912586 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jan 13 20:22:00.912683 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:22:00.912790 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jan 13 20:22:00.912861 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:22:00.912928 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jan 13 20:22:00.912994 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:22:00.913067 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jan 13 20:22:00.913133 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:22:00.913200 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jan 13 20:22:00.914637 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:22:00.914788 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jan 13 20:22:00.914861 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:22:00.914931 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jan 13 20:22:00.915002 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jan 13 20:22:00.915067 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jan 13 20:22:00.915130 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 13 20:22:00.915195 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jan 13 20:22:00.915269 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 13 20:22:00.915343 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jan 13 20:22:00.915409 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 13 20:22:00.915474 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jan 13 20:22:00.915541 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 13 20:22:00.915606 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jan 13 20:22:00.915675 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 13 20:22:00.915756 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jan 13 20:22:00.915827 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 13 20:22:00.915894 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jan 13 20:22:00.915958 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 13 20:22:00.916025 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jan 13 20:22:00.916092 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 13 20:22:00.916156 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jan 13 20:22:00.916219 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Jan 13 20:22:00.917124 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jan 
13 20:22:00.917223 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jan 13 20:22:00.917334 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 20:22:00.917406 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jan 13 20:22:00.917474 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 13 20:22:00.917546 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 13 20:22:00.917612 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 13 20:22:00.917676 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 13 20:22:00.917770 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jan 13 20:22:00.917843 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 13 20:22:00.917913 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 13 20:22:00.917978 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 13 20:22:00.918043 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:22:00.918116 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jan 13 20:22:00.918185 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jan 13 20:22:00.918251 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 13 20:22:00.918356 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 13 20:22:00.918428 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 13 20:22:00.918492 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:22:00.918569 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jan 13 20:22:00.918642 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 13 20:22:00.918709 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 13 20:22:00.918788 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 13 20:22:00.918854 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:22:00.918928 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jan 13 20:22:00.918998 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 13 20:22:00.919061 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 13 20:22:00.919125 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 13 20:22:00.919190 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:22:00.919316 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jan 13 20:22:00.919390 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jan 13 20:22:00.919455 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 13 20:22:00.919517 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 13 20:22:00.919582 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 13 20:22:00.919644 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:22:00.919718 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jan 13 20:22:00.919835 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jan 13 20:22:00.919906 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jan 13 20:22:00.919973 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 13 20:22:00.920038 kernel: pci 0000:00:02.6: bridge window 
[io 0x7000-0x7fff] Jan 13 20:22:00.920102 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 13 20:22:00.920171 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:22:00.920237 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 13 20:22:00.920359 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 13 20:22:00.920430 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 13 20:22:00.920500 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:22:00.920565 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 13 20:22:00.920628 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 13 20:22:00.920690 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 13 20:22:00.920774 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:22:00.920844 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 13 20:22:00.920901 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 20:22:00.920957 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 13 20:22:00.921028 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 13 20:22:00.921088 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 13 20:22:00.921145 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 13 20:22:00.921214 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 13 20:22:00.921346 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 13 20:22:00.921411 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 20:22:00.921479 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 13 20:22:00.921537 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 13 20:22:00.921595 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 20:22:00.921663 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 13 20:22:00.921721 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 13 20:22:00.921832 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 20:22:00.921919 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 13 20:22:00.921982 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 13 20:22:00.922040 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 20:22:00.922105 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 13 20:22:00.922166 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 13 20:22:00.922223 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 20:22:00.922324 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 13 20:22:00.922391 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 13 20:22:00.922457 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 20:22:00.922524 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 13 20:22:00.922585 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 13 20:22:00.922643 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 20:22:00.922709 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 13 20:22:00.922789 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 13 20:22:00.922851 kernel: pci_bus 0000:09: resource 2 
[mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 20:22:00.922865 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 20:22:00.922874 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 20:22:00.922881 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 20:22:00.922889 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 20:22:00.922897 kernel: iommu: Default domain type: Translated Jan 13 20:22:00.922905 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 20:22:00.922912 kernel: efivars: Registered efivars operations Jan 13 20:22:00.922920 kernel: vgaarb: loaded Jan 13 20:22:00.922928 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 20:22:00.922937 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 20:22:00.922945 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 20:22:00.922953 kernel: pnp: PnP ACPI init Jan 13 20:22:00.923024 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 13 20:22:00.923036 kernel: pnp: PnP ACPI: found 1 devices Jan 13 20:22:00.923043 kernel: NET: Registered PF_INET protocol family Jan 13 20:22:00.923051 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 20:22:00.923060 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 13 20:22:00.923070 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 20:22:00.923078 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 13 20:22:00.923086 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 13 20:22:00.923094 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 13 20:22:00.923102 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:22:00.923109 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 13 20:22:00.923117 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 20:22:00.923191 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 13 20:22:00.923202 kernel: PCI: CLS 0 bytes, default 64 Jan 13 20:22:00.923213 kernel: kvm [1]: HYP mode not available Jan 13 20:22:00.923220 kernel: Initialise system trusted keyrings Jan 13 20:22:00.923228 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 13 20:22:00.923236 kernel: Key type asymmetric registered Jan 13 20:22:00.923243 kernel: Asymmetric key parser 'x509' registered Jan 13 20:22:00.923251 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 13 20:22:00.923306 kernel: io scheduler mq-deadline registered Jan 13 20:22:00.923315 kernel: io scheduler kyber registered Jan 13 20:22:00.923322 kernel: io scheduler bfq registered Jan 13 20:22:00.923333 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 20:22:00.923415 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 13 20:22:00.923480 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 13 20:22:00.923545 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:00.923610 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 13 20:22:00.923674 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 13 20:22:00.923756 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Jan 13 20:22:00.923832 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 13 20:22:00.923897 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 13 20:22:00.923962 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:00.924030 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 13 20:22:00.924098 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 13 20:22:00.924164 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:00.924232 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 13 20:22:00.924319 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 13 20:22:00.924387 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:00.924454 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 13 20:22:00.924519 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 13 20:22:00.924587 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:00.924655 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 13 20:22:00.924721 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 13 20:22:00.924838 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:00.924911 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 13 20:22:00.924976 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 13 20:22:00.925045 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:00.925056 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 13 20:22:00.925121 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 13 20:22:00.925190 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 13 20:22:00.925254 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:22:00.925328 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:22:00.925336 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:22:00.925348 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:22:00.925432 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Jan 13 20:22:00.925504 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 13 20:22:00.925573 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 13 20:22:00.925584 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:22:00.925592 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 20:22:00.925656 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 13 20:22:00.925667 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 13 20:22:00.925674 kernel: thunder_xcv, ver 1.0 Jan 13 20:22:00.925685 kernel: thunder_bgx, ver 1.0 Jan 13 20:22:00.925692 kernel: nicpf, ver 1.0 Jan 13 20:22:00.925700 kernel: nicvf, ver 1.0 Jan 13 20:22:00.925801 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:22:00.925865 
kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:22:00 UTC (1736799720) Jan 13 20:22:00.925875 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:22:00.925883 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 20:22:00.925891 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:22:00.925901 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:22:00.925909 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:22:00.925917 kernel: Segment Routing with IPv6 Jan 13 20:22:00.925924 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 20:22:00.925932 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:22:00.925940 kernel: Key type dns_resolver registered Jan 13 20:22:00.925948 kernel: registered taskstats version 1 Jan 13 20:22:00.925955 kernel: Loading compiled-in X.509 certificates Jan 13 20:22:00.925963 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb' Jan 13 20:22:00.925973 kernel: Key type .fscrypt registered Jan 13 20:22:00.925980 kernel: Key type fscrypt-provisioning registered Jan 13 20:22:00.925988 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:22:00.925996 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:22:00.926003 kernel: ima: No architecture policies found Jan 13 20:22:00.926011 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:22:00.926018 kernel: clk: Disabling unused clocks Jan 13 20:22:00.926026 kernel: Freeing unused kernel memory: 39680K Jan 13 20:22:00.926034 kernel: Run /init as init process Jan 13 20:22:00.926043 kernel: with arguments: Jan 13 20:22:00.926051 kernel: /init Jan 13 20:22:00.926058 kernel: with environment: Jan 13 20:22:00.926066 kernel: HOME=/ Jan 13 20:22:00.926074 kernel: TERM=linux Jan 13 20:22:00.926081 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:22:00.926091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:22:00.926101 systemd[1]: Detected virtualization kvm. Jan 13 20:22:00.926111 systemd[1]: Detected architecture arm64. Jan 13 20:22:00.926119 systemd[1]: Running in initrd. Jan 13 20:22:00.926127 systemd[1]: No hostname configured, using default hostname. Jan 13 20:22:00.926135 systemd[1]: Hostname set to . Jan 13 20:22:00.926144 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:22:00.926152 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:22:00.926160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:00.926168 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:00.926178 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:22:00.926186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:22:00.926194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:22:00.926203 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jan 13 20:22:00.926213 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:22:00.926222 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:22:00.926232 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:00.926240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:00.926248 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:22:00.926256 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:22:00.926279 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:22:00.926287 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:22:00.926296 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:22:00.926304 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:22:00.926312 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:22:00.926323 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:22:00.926332 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:22:00.926340 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:00.926348 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:00.926356 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:22:00.926364 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:22:00.926373 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:22:00.926381 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:22:00.926392 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:22:00.926400 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:22:00.926408 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:22:00.926417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:00.926425 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:22:00.926433 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:00.926463 systemd-journald[238]: Collecting audit messages is disabled. Jan 13 20:22:00.926486 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:22:00.926495 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:22:00.926505 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:22:00.926513 kernel: Bridge firewalling registered Jan 13 20:22:00.926521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:00.926530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:00.926538 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:22:00.926548 systemd-journald[238]: Journal started Jan 13 20:22:00.926568 systemd-journald[238]: Runtime Journal (/run/log/journal/f747c47a4ce84d03b57ba9ee3a76191c) is 8.0M, max 76.5M, 68.5M free. 
Jan 13 20:22:00.885248 systemd-modules-load[239]: Inserted module 'overlay' Jan 13 20:22:00.908279 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 13 20:22:00.930677 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:22:00.934234 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:22:00.933316 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:22:00.952466 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:22:00.954604 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:22:00.959357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:22:00.960207 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:00.963433 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:22:00.965331 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:00.974299 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:00.980837 dracut-cmdline[268]: dracut-dracut-053 Jan 13 20:22:00.981805 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:22:00.983853 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:22:01.018233 systemd-resolved[277]: Positive Trust Anchors: Jan 13 20:22:01.018936 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:22:01.018970 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:22:01.027906 systemd-resolved[277]: Defaulting to hostname 'linux'. Jan 13 20:22:01.029709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:22:01.031911 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:01.088334 kernel: SCSI subsystem initialized Jan 13 20:22:01.092296 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:22:01.100314 kernel: iscsi: registered transport (tcp) Jan 13 20:22:01.114282 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:22:01.114344 kernel: QLogic iSCSI HBA Driver Jan 13 20:22:01.163216 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 20:22:01.176639 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:22:01.201563 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:22:01.201644 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:22:01.201663 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:22:01.251310 kernel: raid6: neonx8 gen() 15631 MB/s Jan 13 20:22:01.268342 kernel: raid6: neonx4 gen() 14062 MB/s Jan 13 20:22:01.285342 kernel: raid6: neonx2 gen() 13154 MB/s Jan 13 20:22:01.302328 kernel: raid6: neonx1 gen() 10432 MB/s Jan 13 20:22:01.319705 kernel: raid6: int64x8 gen() 6903 MB/s Jan 13 20:22:01.336326 kernel: raid6: int64x4 gen() 7318 MB/s Jan 13 20:22:01.353318 kernel: raid6: int64x2 gen() 6104 MB/s Jan 13 20:22:01.370325 kernel: raid6: int64x1 gen() 5034 MB/s Jan 13 20:22:01.370402 kernel: raid6: using algorithm neonx8 gen() 15631 MB/s Jan 13 20:22:01.387345 kernel: raid6: .... xor() 11860 MB/s, rmw enabled Jan 13 20:22:01.387435 kernel: raid6: using neon recovery algorithm Jan 13 20:22:01.392297 kernel: xor: measuring software checksum speed Jan 13 20:22:01.392355 kernel: 8regs : 19783 MB/sec Jan 13 20:22:01.392375 kernel: 32regs : 19660 MB/sec Jan 13 20:22:01.392394 kernel: arm64_neon : 24338 MB/sec Jan 13 20:22:01.393293 kernel: xor: using function: arm64_neon (24338 MB/sec) Jan 13 20:22:01.443322 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:22:01.455720 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:22:01.462604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:01.476508 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jan 13 20:22:01.479991 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:01.489902 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:22:01.502783 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Jan 13 20:22:01.540708 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:22:01.546534 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:22:01.601523 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:01.610474 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:22:01.627844 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:22:01.630637 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:22:01.632205 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:01.633561 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:22:01.642105 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:22:01.668347 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:22:01.720623 kernel: ACPI: bus type USB registered Jan 13 20:22:01.720683 kernel: usbcore: registered new interface driver usbfs Jan 13 20:22:01.720695 kernel: usbcore: registered new interface driver hub Jan 13 20:22:01.722071 kernel: usbcore: registered new device driver usb Jan 13 20:22:01.749388 kernel: scsi host0: Virtio SCSI HBA Jan 13 20:22:01.751733 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:22:01.751806 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 13 20:22:01.756445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 13 20:22:01.757649 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:01.758951 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:22:01.759597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:22:01.759767 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:01.763020 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:01.774085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:01.787819 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:22:01.796483 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 13 20:22:01.796599 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 20:22:01.796683 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:22:01.796787 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 13 20:22:01.796869 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 13 20:22:01.796947 kernel: hub 1-0:1.0: USB hub found Jan 13 20:22:01.797043 kernel: hub 1-0:1.0: 4 ports detected Jan 13 20:22:01.797127 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 20:22:01.797222 kernel: hub 2-0:1.0: USB hub found Jan 13 20:22:01.798536 kernel: hub 2-0:1.0: 4 ports detected Jan 13 20:22:01.804339 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 13 20:22:01.805632 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 13 20:22:01.805827 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:22:01.805840 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:22:01.809316 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:01.818558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:22:01.823687 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 13 20:22:01.836068 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 13 20:22:01.836190 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 13 20:22:01.836292 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 13 20:22:01.836391 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:22:01.836480 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:22:01.836490 kernel: GPT:17805311 != 80003071 Jan 13 20:22:01.836499 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:22:01.836508 kernel: GPT:17805311 != 80003071 Jan 13 20:22:01.836517 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:22:01.836526 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:22:01.836535 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 13 20:22:01.847575 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:01.878321 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (511) Jan 13 20:22:01.884292 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (512) Jan 13 20:22:01.886414 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jan 13 20:22:01.897163 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 13 20:22:01.904797 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:22:01.909786 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 13 20:22:01.911910 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 13 20:22:01.922818 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:22:01.931299 disk-uuid[572]: Primary Header is updated. Jan 13 20:22:01.931299 disk-uuid[572]: Secondary Entries is updated. Jan 13 20:22:01.931299 disk-uuid[572]: Secondary Header is updated. Jan 13 20:22:01.939312 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:22:02.041410 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 20:22:02.284335 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 13 20:22:02.419341 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 13 20:22:02.419481 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 13 20:22:02.421294 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 13 20:22:02.476387 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 13 20:22:02.476646 kernel: usbcore: registered new interface driver usbhid Jan 13 20:22:02.476663 kernel: usbhid: USB HID core driver Jan 13 20:22:02.951335 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:22:02.952599 disk-uuid[573]: The operation has completed successfully. Jan 13 20:22:02.998946 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:22:03.000367 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:22:03.017549 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:22:03.022931 sh[588]: Success Jan 13 20:22:03.040295 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:22:03.097901 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:22:03.107525 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:22:03.111365 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:22:03.137394 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78 Jan 13 20:22:03.137468 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:03.137493 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:22:03.138367 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:22:03.138408 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:22:03.145302 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:22:03.146852 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:22:03.148819 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Jan 13 20:22:03.159669 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:22:03.164475 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 20:22:03.176299 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:03.176361 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:03.177279 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:22:03.180288 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:22:03.180329 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:22:03.190565 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:03.190241 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:22:03.197779 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:22:03.203485 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:22:03.294755 ignition[677]: Ignition 2.20.0 Jan 13 20:22:03.294764 ignition[677]: Stage: fetch-offline Jan 13 20:22:03.294799 ignition[677]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:03.294806 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:03.297538 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:22:03.294957 ignition[677]: parsed url from cmdline: "" Jan 13 20:22:03.294961 ignition[677]: no config URL provided Jan 13 20:22:03.294965 ignition[677]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:22:03.294972 ignition[677]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:22:03.294977 ignition[677]: failed to fetch config: resource requires networking Jan 13 20:22:03.295141 ignition[677]: Ignition finished successfully Jan 13 20:22:03.313125 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:22:03.318437 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:22:03.340103 systemd-networkd[778]: lo: Link UP Jan 13 20:22:03.340666 systemd-networkd[778]: lo: Gained carrier Jan 13 20:22:03.342373 systemd-networkd[778]: Enumeration completed Jan 13 20:22:03.342570 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:22:03.343181 systemd[1]: Reached target network.target - Network. Jan 13 20:22:03.344240 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:03.344243 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:03.345041 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:03.345044 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:03.347018 systemd-networkd[778]: eth0: Link UP Jan 13 20:22:03.347021 systemd-networkd[778]: eth0: Gained carrier Jan 13 20:22:03.347029 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 13 20:22:03.352702 systemd-networkd[778]: eth1: Link UP Jan 13 20:22:03.352711 systemd-networkd[778]: eth1: Gained carrier Jan 13 20:22:03.352734 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:03.354516 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:22:03.366613 ignition[780]: Ignition 2.20.0 Jan 13 20:22:03.366625 ignition[780]: Stage: fetch Jan 13 20:22:03.366824 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:03.366834 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:03.366939 ignition[780]: parsed url from cmdline: "" Jan 13 20:22:03.366943 ignition[780]: no config URL provided Jan 13 20:22:03.366948 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:22:03.366956 ignition[780]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:22:03.367043 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 13 20:22:03.367922 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 13 20:22:03.391384 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:22:03.415367 systemd-networkd[778]: eth0: DHCPv4 address 138.199.153.209/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:22:03.568493 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 13 20:22:03.572328 ignition[780]: GET result: OK Jan 13 20:22:03.572399 ignition[780]: parsing config with SHA512: 8dce38a21573a0a08d8da54b3bc80aede25387fb3502b70e1e042604c43838db48a79cc6bc7255588991ef83b648d2808f052f349fffd79f45fcda4d7907cce6 Jan 13 20:22:03.577143 unknown[780]: fetched base config from "system" Jan 13 20:22:03.577158 unknown[780]: fetched base config from "system" Jan 13 20:22:03.577546 ignition[780]: fetch: fetch complete Jan 13 20:22:03.577167 unknown[780]: fetched user config from "hetzner" Jan 13 20:22:03.577553 ignition[780]: fetch: fetch passed Jan 13 20:22:03.579532 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:22:03.577602 ignition[780]: Ignition finished successfully Jan 13 20:22:03.592531 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:22:03.605343 ignition[787]: Ignition 2.20.0 Jan 13 20:22:03.605354 ignition[787]: Stage: kargs Jan 13 20:22:03.605595 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:03.605606 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:03.606485 ignition[787]: kargs: kargs passed Jan 13 20:22:03.606535 ignition[787]: Ignition finished successfully Jan 13 20:22:03.610348 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:22:03.617429 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:22:03.631096 ignition[795]: Ignition 2.20.0 Jan 13 20:22:03.631675 ignition[795]: Stage: disks Jan 13 20:22:03.631877 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:03.631888 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:03.632614 ignition[795]: disks: disks passed Jan 13 20:22:03.634147 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 13 20:22:03.632656 ignition[795]: Ignition finished successfully Jan 13 20:22:03.635577 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:22:03.636347 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:22:03.637654 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:22:03.639099 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:22:03.640568 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:22:03.649598 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:22:03.666412 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:22:03.671548 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:22:03.677402 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:22:03.720335 kernel: EXT4-fs (sda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none. Jan 13 20:22:03.722458 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:22:03.725303 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:22:03.737055 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:22:03.741280 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:22:03.745516 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 20:22:03.746559 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:22:03.746621 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:22:03.752296 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (811) Jan 13 20:22:03.754520 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:03.754564 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:03.755283 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:22:03.760585 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:22:03.762426 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:22:03.762451 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:22:03.764187 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:22:03.768781 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:22:03.818909 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:22:03.826080 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:22:03.830727 coreos-metadata[813]: Jan 13 20:22:03.830 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 13 20:22:03.833382 coreos-metadata[813]: Jan 13 20:22:03.833 INFO Fetch successful Jan 13 20:22:03.833382 coreos-metadata[813]: Jan 13 20:22:03.833 INFO wrote hostname ci-4152-2-0-1-13e84f5c35 to /sysroot/etc/hostname Jan 13 20:22:03.837175 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:22:03.838696 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 13 20:22:03.842837 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:22:03.947384 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:22:03.953503 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:22:03.956461 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:22:03.970362 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:03.986967 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:22:04.001587 ignition[928]: INFO : Ignition 2.20.0 Jan 13 20:22:04.002391 ignition[928]: INFO : Stage: mount Jan 13 20:22:04.002889 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:04.002889 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:04.004408 ignition[928]: INFO : mount: mount passed Jan 13 20:22:04.004408 ignition[928]: INFO : Ignition finished successfully Jan 13 20:22:04.005205 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:22:04.009418 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:22:04.138843 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:22:04.146656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:22:04.155885 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (939) Jan 13 20:22:04.155962 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:22:04.156695 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:22:04.156779 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:22:04.160316 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:22:04.160383 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:22:04.163526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:22:04.186172 ignition[956]: INFO : Ignition 2.20.0 Jan 13 20:22:04.186172 ignition[956]: INFO : Stage: files Jan 13 20:22:04.187458 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:04.187458 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:04.187458 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:22:04.190334 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:22:04.190334 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:22:04.192154 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:22:04.193065 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:22:04.194237 unknown[956]: wrote ssh authorized keys file for user: core Jan 13 20:22:04.195069 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:22:04.199317 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:22:04.199317 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:22:04.199317 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:22:04.204677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:22:04.204677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:22:04.204677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:22:04.204677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:22:04.204677 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 13 20:22:04.482454 systemd-networkd[778]: eth0: Gained IPv6LL Jan 13 20:22:04.674595 systemd-networkd[778]: eth1: Gained IPv6LL Jan 13 20:22:04.790973 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 20:22:05.636044 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:22:05.636044 ignition[956]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 13 20:22:05.639294 ignition[956]: INFO : files: op(7): op(8): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:22:05.641524 ignition[956]: INFO : files: op(7): op(8): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:22:05.641524 ignition[956]: INFO : files: op(7): [finished] 
processing unit "coreos-metadata.service" Jan 13 20:22:05.641524 ignition[956]: INFO : files: createResultFile: createFiles: op(9): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:22:05.641524 ignition[956]: INFO : files: createResultFile: createFiles: op(9): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:22:05.641524 ignition[956]: INFO : files: files passed Jan 13 20:22:05.641524 ignition[956]: INFO : Ignition finished successfully Jan 13 20:22:05.643510 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:22:05.651565 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:22:05.654151 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:22:05.655894 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:22:05.656624 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:22:05.673745 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:05.673745 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:05.677171 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:22:05.678314 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:22:05.679430 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:22:05.685568 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:22:05.714465 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:22:05.714584 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:22:05.716489 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:22:05.717129 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:22:05.718288 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:22:05.723454 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:22:05.736700 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:22:05.743544 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:22:05.758183 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:05.758952 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:05.760811 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:22:05.761809 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:22:05.761937 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:22:05.763404 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:22:05.764011 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:22:05.765056 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:22:05.766036 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:22:05.767005 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 13 20:22:05.768023 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:22:05.769030 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:22:05.770151 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:22:05.771098 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:22:05.772181 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:22:05.773014 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:22:05.773136 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:22:05.774343 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:05.774965 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:05.775937 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:22:05.776009 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:05.776983 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:22:05.777097 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:22:05.778554 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:22:05.778666 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:22:05.779826 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:22:05.779914 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:22:05.780944 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 20:22:05.781035 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 20:22:05.787570 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:22:05.788072 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:22:05.788195 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:05.792626 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:22:05.793212 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:22:05.793382 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:05.795057 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:22:05.795380 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:22:05.806093 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:22:05.806787 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:22:05.810330 ignition[1009]: INFO : Ignition 2.20.0 Jan 13 20:22:05.810330 ignition[1009]: INFO : Stage: umount Jan 13 20:22:05.810330 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:22:05.810330 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:22:05.814387 ignition[1009]: INFO : umount: umount passed Jan 13 20:22:05.814387 ignition[1009]: INFO : Ignition finished successfully Jan 13 20:22:05.815726 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:22:05.815859 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:22:05.819779 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:22:05.820242 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 13 20:22:05.821432 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:22:05.822612 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:22:05.822656 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:22:05.823776 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:22:05.823815 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:22:05.825924 systemd[1]: Stopped target network.target - Network. Jan 13 20:22:05.826393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:22:05.826443 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:22:05.827863 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:22:05.829091 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:22:05.832310 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:05.832952 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:22:05.834225 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:22:05.835315 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:22:05.835371 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:22:05.836416 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:22:05.836461 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:22:05.837417 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:22:05.837477 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:22:05.838499 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:22:05.838548 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:22:05.839993 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:22:05.840865 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:22:05.843411 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:22:05.843496 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:22:05.844518 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:22:05.844622 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:22:05.845510 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 13 20:22:05.848357 systemd-networkd[778]: eth1: DHCPv6 lease lost Jan 13 20:22:05.850811 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:22:05.851020 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:22:05.853638 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:22:05.853699 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:22:05.860414 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:22:05.860985 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:22:05.861057 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:22:05.863149 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:05.864218 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:22:05.864346 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 13 20:22:05.877248 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:22:05.877424 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:22:05.879335 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:22:05.879411 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:05.880618 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:22:05.880671 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:05.882213 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:22:05.883450 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:22:05.885418 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:22:05.885575 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:05.888155 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:22:05.888200 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:05.888869 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:22:05.888897 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:05.889878 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:22:05.889922 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:22:05.891466 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:22:05.891509 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:22:05.892561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:22:05.892608 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:22:05.898518 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:22:05.899084 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:22:05.899143 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:05.900900 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:22:05.900943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:05.909529 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:22:05.909637 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:22:05.911121 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:22:05.921601 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:22:05.931366 systemd[1]: Switching root. Jan 13 20:22:05.966061 systemd-journald[238]: Journal stopped Jan 13 20:22:06.856645 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jan 13 20:22:06.856734 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:22:06.856748 kernel: SELinux: policy capability open_perms=1 Jan 13 20:22:06.856758 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:22:06.856771 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:22:06.856780 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:22:06.856793 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:22:06.856802 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:22:06.856812 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:22:06.856826 kernel: audit: type=1403 audit(1736799726.133:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:22:06.856837 systemd[1]: Successfully loaded SELinux policy in 35.040ms. Jan 13 20:22:06.856852 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.788ms. Jan 13 20:22:06.856865 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:22:06.856876 systemd[1]: Detected virtualization kvm. Jan 13 20:22:06.856886 systemd[1]: Detected architecture arm64. Jan 13 20:22:06.856896 systemd[1]: Detected first boot. Jan 13 20:22:06.856907 systemd[1]: Hostname set to . Jan 13 20:22:06.856917 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:22:06.856928 zram_generator::config[1051]: No configuration found. Jan 13 20:22:06.856939 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:22:06.856951 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:22:06.856961 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:22:06.856971 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:22:06.856986 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:22:06.856997 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:22:06.857007 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:22:06.857018 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:22:06.857028 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:22:06.857039 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:22:06.857051 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:22:06.857061 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:22:06.857072 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:22:06.857082 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:22:06.857093 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:22:06.857103 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:22:06.857114 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 13 20:22:06.857124 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:22:06.857134 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:22:06.857146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:22:06.857156 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:22:06.857167 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:22:06.857177 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:22:06.857187 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:22:06.857201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:22:06.857212 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:22:06.857224 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:22:06.857234 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:22:06.857245 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:22:06.857255 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:22:06.857276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:22:06.857287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:22:06.857297 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:22:06.857307 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:22:06.857319 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:22:06.857329 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:22:06.857339 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:22:06.857349 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:22:06.857359 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:22:06.857370 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:22:06.857381 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:22:06.857391 systemd[1]: Reached target machines.target - Containers. Jan 13 20:22:06.857401 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:22:06.857413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:06.857426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:22:06.857437 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:22:06.857447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:06.857457 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:22:06.857468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:06.857478 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:22:06.857489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 13 20:22:06.857500 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:22:06.857511 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:22:06.857521 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:22:06.857531 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:22:06.857542 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:22:06.857552 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:22:06.857562 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:22:06.857572 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:22:06.857583 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:22:06.857594 kernel: fuse: init (API version 7.39) Jan 13 20:22:06.857604 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:22:06.857616 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:22:06.857628 systemd[1]: Stopped verity-setup.service. Jan 13 20:22:06.857640 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:22:06.857651 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:22:06.857662 kernel: loop: module loaded Jan 13 20:22:06.857672 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:22:06.857682 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:22:06.857768 systemd-journald[1118]: Collecting audit messages is disabled. Jan 13 20:22:06.857799 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:22:06.857811 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:22:06.857822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:22:06.857832 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:22:06.857842 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:22:06.857852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:06.857862 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:06.857873 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:06.857885 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:06.857895 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:22:06.857905 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:22:06.857917 systemd-journald[1118]: Journal started Jan 13 20:22:06.857941 systemd-journald[1118]: Runtime Journal (/run/log/journal/f747c47a4ce84d03b57ba9ee3a76191c) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:22:06.618871 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:22:06.865823 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:22:06.643769 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 20:22:06.644307 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:22:06.859684 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:06.859897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 13 20:22:06.860734 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:22:06.861534 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:22:06.862456 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:22:06.863889 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:22:06.870407 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:22:06.876455 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:22:06.877162 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:22:06.877197 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:22:06.882630 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:22:06.887461 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:22:06.893484 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:22:06.894636 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:06.896299 kernel: ACPI: bus type drm_connector registered Jan 13 20:22:06.897039 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:22:06.898934 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:22:06.900356 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:22:06.901613 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:22:06.903369 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:22:06.906463 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:22:06.910194 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:22:06.913825 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:22:06.913985 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:22:06.914870 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:22:06.915626 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:22:06.918280 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:22:06.919225 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:22:06.947511 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:22:06.958398 systemd-journald[1118]: Time spent on flushing to /var/log/journal/f747c47a4ce84d03b57ba9ee3a76191c is 103.191ms for 1107 entries. Jan 13 20:22:06.958398 systemd-journald[1118]: System Journal (/var/log/journal/f747c47a4ce84d03b57ba9ee3a76191c) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:22:07.079423 systemd-journald[1118]: Received client request to flush runtime journal. 
Jan 13 20:22:07.079473 kernel: loop0: detected capacity change from 0 to 194512 Jan 13 20:22:07.079497 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:22:07.079513 kernel: loop1: detected capacity change from 0 to 116808 Jan 13 20:22:06.979074 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:22:06.980132 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:22:06.991744 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:22:07.006193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:22:07.037320 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:22:07.051208 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:22:07.057783 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:22:07.065094 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:22:07.084308 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:22:07.085796 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:22:07.095301 kernel: loop2: detected capacity change from 0 to 8 Jan 13 20:22:07.097750 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:22:07.100073 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:22:07.120560 kernel: loop3: detected capacity change from 0 to 113536 Jan 13 20:22:07.121479 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 13 20:22:07.121494 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 13 20:22:07.127471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:22:07.175326 kernel: loop4: detected capacity change from 0 to 194512 Jan 13 20:22:07.192111 kernel: loop5: detected capacity change from 0 to 116808 Jan 13 20:22:07.213292 kernel: loop6: detected capacity change from 0 to 8 Jan 13 20:22:07.216288 kernel: loop7: detected capacity change from 0 to 113536 Jan 13 20:22:07.227798 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 13 20:22:07.228233 (sd-merge)[1192]: Merged extensions into '/usr'. Jan 13 20:22:07.234227 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:22:07.234247 systemd[1]: Reloading... Jan 13 20:22:07.363755 zram_generator::config[1219]: No configuration found. Jan 13 20:22:07.477427 ldconfig[1144]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:22:07.508573 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:22:07.554762 systemd[1]: Reloading finished in 320 ms. Jan 13 20:22:07.598295 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:22:07.601673 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:22:07.610503 systemd[1]: Starting ensure-sysext.service... 
Jan 13 20:22:07.619971 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:22:07.623423 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:22:07.623440 systemd[1]: Reloading... Jan 13 20:22:07.649037 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:22:07.649328 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:22:07.650017 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:22:07.650228 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 13 20:22:07.650349 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Jan 13 20:22:07.657750 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:22:07.657764 systemd-tmpfiles[1256]: Skipping /boot Jan 13 20:22:07.672647 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:22:07.672663 systemd-tmpfiles[1256]: Skipping /boot Jan 13 20:22:07.694674 zram_generator::config[1283]: No configuration found. Jan 13 20:22:07.805670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:22:07.851178 systemd[1]: Reloading finished in 227 ms. Jan 13 20:22:07.873100 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:22:07.879731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:22:07.894736 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:22:07.898573 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:22:07.905435 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:22:07.913452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:22:07.921954 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:22:07.925048 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:22:07.928587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:07.933333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:22:07.937127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:07.940531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:22:07.941490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:07.947559 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:22:07.952155 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:07.952587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 20:22:07.957519 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:22:07.959885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:07.968756 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:22:07.970421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:07.972298 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:22:07.974151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:07.974618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:07.985067 systemd[1]: Finished ensure-sysext.service. Jan 13 20:22:07.986071 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Jan 13 20:22:07.988995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:07.990323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:07.996755 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:22:08.001499 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:22:08.002442 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:22:08.004738 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:22:08.004872 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:22:08.011038 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:08.013308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:22:08.016633 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:22:08.025337 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:22:08.039554 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:22:08.050467 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:22:08.072157 augenrules[1373]: No rules Jan 13 20:22:08.074513 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:22:08.077666 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:22:08.077898 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:22:08.078912 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:22:08.082045 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:22:08.133780 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:22:08.219180 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:22:08.220213 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 13 20:22:08.237653 systemd-networkd[1362]: lo: Link UP Jan 13 20:22:08.237664 systemd-networkd[1362]: lo: Gained carrier Jan 13 20:22:08.240340 systemd-networkd[1362]: Enumeration completed Jan 13 20:22:08.240443 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:22:08.242280 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:22:08.244215 systemd-networkd[1362]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:08.244229 systemd-networkd[1362]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:08.245396 systemd-networkd[1362]: eth1: Link UP Jan 13 20:22:08.245409 systemd-networkd[1362]: eth1: Gained carrier Jan 13 20:22:08.245424 systemd-networkd[1362]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:08.258660 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:22:08.267337 systemd-resolved[1326]: Positive Trust Anchors: Jan 13 20:22:08.267419 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:22:08.267453 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:22:08.273515 systemd-resolved[1326]: Using system hostname 'ci-4152-2-0-1-13e84f5c35'. Jan 13 20:22:08.275358 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:22:08.276970 systemd[1]: Reached target network.target - Network. Jan 13 20:22:08.277649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:22:08.289332 systemd-networkd[1362]: eth1: DHCPv4 address 10.0.0.4/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:22:08.290663 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 13 20:22:08.310323 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 13 20:22:08.310444 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:22:08.319281 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 13 20:22:08.319338 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:22:08.319351 kernel: [drm] features: -context_init Jan 13 20:22:08.320572 kernel: [drm] number of scanouts: 1 Jan 13 20:22:08.320626 kernel: [drm] number of cap sets: 0 Jan 13 20:22:08.321821 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 13 20:22:08.326766 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:22:08.335289 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:22:08.336581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 13 20:22:08.340992 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:22:08.345578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:22:08.346434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:22:08.346494 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:22:08.350477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:22:08.352318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:22:08.355384 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:08.355395 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:22:08.356014 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 13 20:22:08.356455 systemd-networkd[1362]: eth0: Link UP Jan 13 20:22:08.356467 systemd-networkd[1362]: eth0: Gained carrier Jan 13 20:22:08.356484 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:08.361592 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 13 20:22:08.370406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:22:08.371411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:22:08.372571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:22:08.374607 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:22:08.374761 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:22:08.379865 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:22:08.386757 systemd-networkd[1362]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:22:08.412625 systemd-networkd[1362]: eth0: DHCPv4 address 138.199.153.209/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:22:08.413518 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 13 20:22:08.414534 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 13 20:22:08.416327 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1387) Jan 13 20:22:08.421677 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:08.447957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:22:08.449356 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:08.458396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:22:08.460780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Jan 13 20:22:08.464445 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:22:08.490737 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:22:08.533066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:08.571012 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:22:08.578549 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:22:08.594036 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:22:08.621176 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:22:08.623380 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:22:08.624825 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:22:08.626398 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:22:08.627643 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:22:08.628573 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:22:08.629241 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:22:08.630046 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:22:08.630754 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:22:08.630789 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:22:08.631240 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:22:08.632851 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:22:08.634767 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:22:08.639401 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:22:08.641468 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:22:08.642631 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:22:08.643394 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:22:08.643948 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:22:08.644533 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:22:08.644567 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:22:08.647409 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:22:08.650558 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:22:08.656508 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:22:08.660502 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:22:08.662328 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:22:08.665416 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 13 20:22:08.665938 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:22:08.668476 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:22:08.670421 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 13 20:22:08.673812 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:22:08.675558 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:22:08.678350 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:22:08.679641 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:22:08.680097 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:22:08.682462 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:22:08.686508 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:22:08.702281 jq[1458]: true Jan 13 20:22:08.717393 jq[1448]: false Jan 13 20:22:08.718881 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:22:08.719076 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:22:08.736445 jq[1462]: true Jan 13 20:22:08.736859 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:22:08.737067 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:22:08.741682 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:22:08.772861 update_engine[1457]: I20250113 20:22:08.772336 1457 main.cc:92] Flatcar Update Engine starting Jan 13 20:22:08.774150 dbus-daemon[1447]: [system] SELinux support is enabled Jan 13 20:22:08.773986 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:22:08.774348 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:22:08.778395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:22:08.778431 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:22:08.779407 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:22:08.779433 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 13 20:22:08.787002 extend-filesystems[1449]: Found loop4 Jan 13 20:22:08.787002 extend-filesystems[1449]: Found loop5 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found loop6 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found loop7 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found sda Jan 13 20:22:08.805434 extend-filesystems[1449]: Found sda1 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found sda2 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found sda3 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found usr Jan 13 20:22:08.805434 extend-filesystems[1449]: Found sda4 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found sda6 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found sda7 Jan 13 20:22:08.805434 extend-filesystems[1449]: Found sda9 Jan 13 20:22:08.805434 extend-filesystems[1449]: Checking size of /dev/sda9 Jan 13 20:22:08.839482 coreos-metadata[1446]: Jan 13 20:22:08.804 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 13 20:22:08.839482 coreos-metadata[1446]: Jan 13 20:22:08.806 INFO Fetch successful Jan 13 20:22:08.839482 coreos-metadata[1446]: Jan 13 20:22:08.809 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 13 20:22:08.839482 coreos-metadata[1446]: Jan 13 20:22:08.809 INFO Fetch successful Jan 13 20:22:08.788444 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:22:08.839721 update_engine[1457]: I20250113 20:22:08.789656 1457 update_check_scheduler.cc:74] Next update check in 2m35s Jan 13 20:22:08.839793 extend-filesystems[1449]: Resized partition /dev/sda9 Jan 13 20:22:08.795496 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:22:08.850618 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:22:08.837647 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:22:08.840329 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:22:08.863608 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 13 20:22:08.889160 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:22:08.892630 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:22:08.905269 systemd-logind[1456]: New seat seat0. Jan 13 20:22:08.920462 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:22:08.921033 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 13 20:22:08.921240 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:22:08.949108 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1394) Jan 13 20:22:08.949187 bash[1507]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:22:08.955819 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:22:08.977577 systemd[1]: Starting sshkeys.service... Jan 13 20:22:08.997149 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:22:09.005667 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
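For scale, the ext4 resize recorded above grows /dev/sda9 from 1,617,920 to 9,393,147 blocks of 4 KiB each, i.e. from roughly 6.2 GiB to roughly 35.8 GiB of root filesystem.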
Jan 13 20:22:09.047205 containerd[1466]: time="2025-01-13T20:22:09.042584680Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:22:09.052290 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 13 20:22:09.086790 extend-filesystems[1495]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 20:22:09.086790 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 13 20:22:09.086790 extend-filesystems[1495]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 13 20:22:09.094540 extend-filesystems[1449]: Resized filesystem in /dev/sda9 Jan 13 20:22:09.094540 extend-filesystems[1449]: Found sr0 Jan 13 20:22:09.090858 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:22:09.092337 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:22:09.103797 coreos-metadata[1520]: Jan 13 20:22:09.103 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 13 20:22:09.104618 coreos-metadata[1520]: Jan 13 20:22:09.104 INFO Fetch successful Jan 13 20:22:09.106047 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:22:09.108463 unknown[1520]: wrote ssh authorized keys file for user: core Jan 13 20:22:09.109726 containerd[1466]: time="2025-01-13T20:22:09.109672280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.111541720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.111579480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.111597640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.111773840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.111792520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.111858240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.111869800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.112023920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.112039680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.112053560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:09.113429 containerd[1466]: time="2025-01-13T20:22:09.112063240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:09.114052 containerd[1466]: time="2025-01-13T20:22:09.112129800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:09.114052 containerd[1466]: time="2025-01-13T20:22:09.112345720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:22:09.114052 containerd[1466]: time="2025-01-13T20:22:09.112440760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:22:09.114052 containerd[1466]: time="2025-01-13T20:22:09.112453800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:22:09.114052 containerd[1466]: time="2025-01-13T20:22:09.112532760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:22:09.114052 containerd[1466]: time="2025-01-13T20:22:09.112571880Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.119572960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.119642960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.119665280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.119693360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.119714680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.119895200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.120149480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.120246800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.120307040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.120326200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.120341000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.120354920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.120367280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:22:09.120589 containerd[1466]: time="2025-01-13T20:22:09.120387240Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120402160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120414720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120426240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120437120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120457920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120471280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120483040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120497240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120508920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120521240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120532360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120545600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120559320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.120962 containerd[1466]: time="2025-01-13T20:22:09.120574680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120591920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120605840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120617880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120632400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120652360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120666680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120677320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120924600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120947760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120962120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120973640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120984240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.120996280Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:22:09.121191 containerd[1466]: time="2025-01-13T20:22:09.121005840Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:22:09.121437 containerd[1466]: time="2025-01-13T20:22:09.121015840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:22:09.123880 containerd[1466]: time="2025-01-13T20:22:09.122248320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:22:09.123880 containerd[1466]: time="2025-01-13T20:22:09.122380440Z" level=info msg="Connect containerd service" Jan 13 20:22:09.123880 containerd[1466]: time="2025-01-13T20:22:09.122428720Z" level=info msg="using legacy CRI server" Jan 13 20:22:09.123880 containerd[1466]: time="2025-01-13T20:22:09.122436280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:22:09.123880 containerd[1466]: time="2025-01-13T20:22:09.122711040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:22:09.123880 containerd[1466]: time="2025-01-13T20:22:09.123397960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:22:09.124362 
containerd[1466]: time="2025-01-13T20:22:09.124336640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:22:09.124481 containerd[1466]: time="2025-01-13T20:22:09.124467640Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:22:09.124563 containerd[1466]: time="2025-01-13T20:22:09.124488560Z" level=info msg="Start subscribing containerd event" Jan 13 20:22:09.124649 containerd[1466]: time="2025-01-13T20:22:09.124636440Z" level=info msg="Start recovering state" Jan 13 20:22:09.124778 containerd[1466]: time="2025-01-13T20:22:09.124764600Z" level=info msg="Start event monitor" Jan 13 20:22:09.124831 containerd[1466]: time="2025-01-13T20:22:09.124820840Z" level=info msg="Start snapshots syncer" Jan 13 20:22:09.124908 containerd[1466]: time="2025-01-13T20:22:09.124894920Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:22:09.124957 containerd[1466]: time="2025-01-13T20:22:09.124946720Z" level=info msg="Start streaming server" Jan 13 20:22:09.125141 containerd[1466]: time="2025-01-13T20:22:09.125125560Z" level=info msg="containerd successfully booted in 0.085469s" Jan 13 20:22:09.126443 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:22:09.142785 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:22:09.144076 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:22:09.149664 systemd[1]: Finished sshkeys.service. Jan 13 20:22:09.602457 systemd-networkd[1362]: eth1: Gained IPv6LL Jan 13 20:22:09.603027 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 13 20:22:09.607869 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:22:09.610127 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:22:09.619377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:09.623429 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:22:09.669832 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:22:10.000790 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:22:10.021753 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:22:10.031574 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:22:10.039534 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:22:10.039780 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:22:10.047649 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:22:10.050441 systemd-networkd[1362]: eth0: Gained IPv6LL Jan 13 20:22:10.051084 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jan 13 20:22:10.059777 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:22:10.068775 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:22:10.072102 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:22:10.074523 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:22:10.374616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:22:10.376652 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:10.377360 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:22:10.378603 systemd[1]: Startup finished in 753ms (kernel) + 5.437s (initrd) + 4.280s (userspace) = 10.471s. Jan 13 20:22:11.017685 kubelet[1570]: E0113 20:22:11.017592 1570 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:11.023218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:11.023382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:21.274187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:22:21.286686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:21.383939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:21.389203 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:21.451537 kubelet[1590]: E0113 20:22:21.451472 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:21.455450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:21.455611 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:31.706655 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:22:31.716742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:31.822473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:31.823315 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:31.873521 kubelet[1606]: E0113 20:22:31.873415 1606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:31.878011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:31.878316 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:40.400702 systemd-timesyncd[1348]: Contacted time server 144.76.139.8:123 (2.flatcar.pool.ntp.org). Jan 13 20:22:40.400820 systemd-timesyncd[1348]: Initial clock synchronization to Mon 2025-01-13 20:22:40.499495 UTC. Jan 13 20:22:42.130314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:22:42.147697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
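The kubelet failures above (and the restarts that follow, through restart counter 12) all report the same cause in run.go:74: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-managed node that file is normally generated during `kubeadm init` or `kubeadm join`, so the crash loop is expected until the node is bootstrapped, and systemd simply reschedules the unit roughly every ten seconds, as the timestamps show. Purely as an illustrative sketch (the field values below are assumptions, not read from this host), a placeholder of the kind kubelet is looking for could be produced like this:

```python
# Illustrative sketch only: writes a minimal KubeletConfiguration of the kind
# the failing unit above reports as missing. On a kubeadm-managed node this
# file is generated by `kubeadm init`/`kubeadm join`; the values here are
# assumptions chosen for the example, not taken from this machine.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # assumed; matches the SystemdCgroup:true runc option containerd logs above
clusterDomain: cluster.local   # assumed default
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
authorization:
  mode: Webhook
"""

def write_placeholder(path: str = "/var/lib/kubelet/config.yaml") -> None:
    """Create the config file kubelet's run.go reported as missing."""
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(MINIMAL_KUBELET_CONFIG)

if __name__ == "__main__":
    write_placeholder()
```

In this boot the loop ends at 20:24:15, when kubelet is stopped and relaunched with a real configuration and the v1.29.2 startup messages near the end of the log appear.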
Jan 13 20:22:42.262561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:42.263614 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:42.316781 kubelet[1622]: E0113 20:22:42.316708 1622 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:42.319930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:42.320266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:52.396578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 20:22:52.404583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:22:52.512637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:22:52.516703 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:22:52.563055 kubelet[1639]: E0113 20:22:52.562947 1639 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:22:52.565997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:22:52.566137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:22:53.914122 update_engine[1457]: I20250113 20:22:53.913959 1457 update_attempter.cc:509] Updating boot flags... Jan 13 20:22:53.961292 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1656) Jan 13 20:22:54.034742 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1660) Jan 13 20:23:02.646425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 13 20:23:02.660672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:02.761100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:02.775965 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:02.825083 kubelet[1673]: E0113 20:23:02.825031 1673 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:02.828136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:02.828432 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:12.896583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 13 20:23:12.904627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 20:23:13.016757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:13.021215 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:13.067420 kubelet[1689]: E0113 20:23:13.067350 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:13.070680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:13.070945 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:23.146402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 13 20:23:23.159596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:23.285606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:23.286629 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:23.339494 kubelet[1705]: E0113 20:23:23.339421 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:23.343091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:23.343590 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:33.396518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 13 20:23:33.405515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:33.511686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:33.512246 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:33.566301 kubelet[1721]: E0113 20:23:33.566215 1721 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:33.569729 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:33.569956 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:43.646514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 13 20:23:43.653637 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:43.775694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:23:43.781403 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:43.828108 kubelet[1738]: E0113 20:23:43.827995 1738 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:43.830775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:43.830960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:23:53.896644 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 13 20:23:53.902591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:54.015308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:54.020447 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:23:54.071944 kubelet[1753]: E0113 20:23:54.071786 1753 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:23:54.075451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:23:54.075595 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:24:02.108203 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:24:02.114611 systemd[1]: Started sshd@0-138.199.153.209:22-147.75.109.163:35230.service - OpenSSH per-connection server daemon (147.75.109.163:35230). Jan 13 20:24:03.109009 sshd[1762]: Accepted publickey for core from 147.75.109.163 port 35230 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:03.111294 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:03.122331 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:24:03.128942 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:24:03.131516 systemd-logind[1456]: New session 1 of user core. Jan 13 20:24:03.139136 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:24:03.145749 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:24:03.152172 (systemd)[1766]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:24:03.257379 systemd[1766]: Queued start job for default target default.target. Jan 13 20:24:03.269012 systemd[1766]: Created slice app.slice - User Application Slice. Jan 13 20:24:03.269313 systemd[1766]: Reached target paths.target - Paths. Jan 13 20:24:03.269483 systemd[1766]: Reached target timers.target - Timers. Jan 13 20:24:03.272027 systemd[1766]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:24:03.286029 systemd[1766]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:24:03.286197 systemd[1766]: Reached target sockets.target - Sockets. 
Jan 13 20:24:03.286221 systemd[1766]: Reached target basic.target - Basic System. Jan 13 20:24:03.286710 systemd[1766]: Reached target default.target - Main User Target. Jan 13 20:24:03.286955 systemd[1766]: Startup finished in 127ms. Jan 13 20:24:03.287228 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:24:03.296533 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:24:03.988571 systemd[1]: Started sshd@1-138.199.153.209:22-147.75.109.163:35232.service - OpenSSH per-connection server daemon (147.75.109.163:35232). Jan 13 20:24:04.146559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 13 20:24:04.153590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:24:04.268992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:24:04.273889 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:24:04.327722 kubelet[1787]: E0113 20:24:04.327628 1787 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:24:04.332402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:24:04.333059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:24:04.962315 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 35232 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:04.964171 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:04.971845 systemd-logind[1456]: New session 2 of user core. Jan 13 20:24:04.979616 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:24:05.638198 sshd[1795]: Connection closed by 147.75.109.163 port 35232 Jan 13 20:24:05.637202 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:05.641742 systemd[1]: sshd@1-138.199.153.209:22-147.75.109.163:35232.service: Deactivated successfully. Jan 13 20:24:05.643170 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:24:05.645506 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:24:05.646759 systemd-logind[1456]: Removed session 2. Jan 13 20:24:05.816659 systemd[1]: Started sshd@2-138.199.153.209:22-147.75.109.163:35238.service - OpenSSH per-connection server daemon (147.75.109.163:35238). Jan 13 20:24:06.798652 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 35238 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:06.800825 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:06.805661 systemd-logind[1456]: New session 3 of user core. Jan 13 20:24:06.817595 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:24:07.476300 sshd[1802]: Connection closed by 147.75.109.163 port 35238 Jan 13 20:24:07.477314 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:07.481798 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. 
Jan 13 20:24:07.482608 systemd[1]: sshd@2-138.199.153.209:22-147.75.109.163:35238.service: Deactivated successfully. Jan 13 20:24:07.485850 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:24:07.487771 systemd-logind[1456]: Removed session 3. Jan 13 20:24:07.656762 systemd[1]: Started sshd@3-138.199.153.209:22-147.75.109.163:43244.service - OpenSSH per-connection server daemon (147.75.109.163:43244). Jan 13 20:24:08.640967 sshd[1807]: Accepted publickey for core from 147.75.109.163 port 43244 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:08.643233 sshd-session[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:08.648964 systemd-logind[1456]: New session 4 of user core. Jan 13 20:24:08.654533 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:24:09.325526 sshd[1809]: Connection closed by 147.75.109.163 port 43244 Jan 13 20:24:09.324589 sshd-session[1807]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:09.328636 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:24:09.329105 systemd[1]: sshd@3-138.199.153.209:22-147.75.109.163:43244.service: Deactivated successfully. Jan 13 20:24:09.331067 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:24:09.333737 systemd-logind[1456]: Removed session 4. Jan 13 20:24:09.507702 systemd[1]: Started sshd@4-138.199.153.209:22-147.75.109.163:43260.service - OpenSSH per-connection server daemon (147.75.109.163:43260). Jan 13 20:24:10.488101 sshd[1814]: Accepted publickey for core from 147.75.109.163 port 43260 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:10.489898 sshd-session[1814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:10.497520 systemd-logind[1456]: New session 5 of user core. Jan 13 20:24:10.503599 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:24:11.014287 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:24:11.014599 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:24:11.036968 sudo[1817]: pam_unix(sudo:session): session closed for user root Jan 13 20:24:11.195383 sshd[1816]: Connection closed by 147.75.109.163 port 43260 Jan 13 20:24:11.196731 sshd-session[1814]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:11.202157 systemd[1]: sshd@4-138.199.153.209:22-147.75.109.163:43260.service: Deactivated successfully. Jan 13 20:24:11.204492 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:24:11.205525 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:24:11.207900 systemd-logind[1456]: Removed session 5. Jan 13 20:24:11.371704 systemd[1]: Started sshd@5-138.199.153.209:22-147.75.109.163:43262.service - OpenSSH per-connection server daemon (147.75.109.163:43262). Jan 13 20:24:12.375584 sshd[1822]: Accepted publickey for core from 147.75.109.163 port 43262 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:12.378067 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:12.384186 systemd-logind[1456]: New session 6 of user core. Jan 13 20:24:12.393658 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 13 20:24:12.900697 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:24:12.900995 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:24:12.906241 sudo[1826]: pam_unix(sudo:session): session closed for user root Jan 13 20:24:12.912574 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:24:12.912834 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:24:12.929870 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:24:12.959909 augenrules[1848]: No rules Jan 13 20:24:12.961210 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:24:12.961453 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:24:12.963171 sudo[1825]: pam_unix(sudo:session): session closed for user root Jan 13 20:24:13.122517 sshd[1824]: Connection closed by 147.75.109.163 port 43262 Jan 13 20:24:13.123295 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:13.127983 systemd[1]: sshd@5-138.199.153.209:22-147.75.109.163:43262.service: Deactivated successfully. Jan 13 20:24:13.129842 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:24:13.130624 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:24:13.131846 systemd-logind[1456]: Removed session 6. Jan 13 20:24:13.296550 systemd[1]: Started sshd@6-138.199.153.209:22-147.75.109.163:43272.service - OpenSSH per-connection server daemon (147.75.109.163:43272). Jan 13 20:24:14.281342 sshd[1856]: Accepted publickey for core from 147.75.109.163 port 43272 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:14.283046 sshd-session[1856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:14.289803 systemd-logind[1456]: New session 7 of user core. Jan 13 20:24:14.303070 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:24:14.396396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 13 20:24:14.402681 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:24:14.522469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:24:14.527859 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:24:14.580281 kubelet[1867]: E0113 20:24:14.580089 1867 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:24:14.583723 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:24:14.583996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:24:14.805891 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:24:14.806314 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:24:15.466600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:24:15.479823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:24:15.511529 systemd[1]: Reloading requested from client PID 1914 ('systemctl') (unit session-7.scope)... Jan 13 20:24:15.511678 systemd[1]: Reloading... Jan 13 20:24:15.610296 zram_generator::config[1950]: No configuration found. Jan 13 20:24:15.719230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:24:15.784970 systemd[1]: Reloading finished in 272 ms. Jan 13 20:24:15.834484 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:24:15.834592 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:24:15.834918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:24:15.844675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:24:15.951502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:24:15.955298 (kubelet)[2002]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:24:16.003895 kubelet[2002]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:24:16.003895 kubelet[2002]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:24:16.003895 kubelet[2002]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:24:16.004423 kubelet[2002]: I0113 20:24:16.003839 2002 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:24:16.519946 kubelet[2002]: I0113 20:24:16.519889 2002 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:24:16.519946 kubelet[2002]: I0113 20:24:16.519943 2002 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:24:16.520319 kubelet[2002]: I0113 20:24:16.520302 2002 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:24:16.538827 kubelet[2002]: I0113 20:24:16.538650 2002 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:24:16.554129 kubelet[2002]: I0113 20:24:16.553104 2002 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:24:16.554129 kubelet[2002]: I0113 20:24:16.553462 2002 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:24:16.554129 kubelet[2002]: I0113 20:24:16.553655 2002 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:24:16.554129 kubelet[2002]: I0113 20:24:16.553669 2002 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:24:16.554129 kubelet[2002]: I0113 20:24:16.553677 2002 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:24:16.554129 kubelet[2002]: I0113 20:24:16.553893 2002 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:24:16.557248 kubelet[2002]: I0113 20:24:16.557221 2002 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:24:16.557418 kubelet[2002]: I0113 20:24:16.557406 2002 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:24:16.558159 kubelet[2002]: I0113 20:24:16.558132 2002 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:24:16.558303 kubelet[2002]: I0113 20:24:16.558292 2002 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:24:16.558411 kubelet[2002]: E0113 20:24:16.558383 2002 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:16.558472 kubelet[2002]: E0113 20:24:16.558457 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:16.560769 kubelet[2002]: I0113 20:24:16.560744 2002 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:24:16.561352 kubelet[2002]: I0113 20:24:16.561335 2002 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:24:16.561468 kubelet[2002]: W0113 20:24:16.561454 2002 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:24:16.562432 kubelet[2002]: I0113 20:24:16.562411 2002 server.go:1256] "Started kubelet" Jan 13 20:24:16.563537 kubelet[2002]: I0113 20:24:16.562750 2002 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:24:16.563849 kubelet[2002]: I0113 20:24:16.563707 2002 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:24:16.566279 kubelet[2002]: I0113 20:24:16.565464 2002 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:24:16.566279 kubelet[2002]: I0113 20:24:16.565703 2002 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:24:16.567106 kubelet[2002]: I0113 20:24:16.567058 2002 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:24:16.571081 kubelet[2002]: E0113 20:24:16.571052 2002 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.181a5a448059717a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-01-13 20:24:16.56236889 +0000 UTC m=+0.603279692,LastTimestamp:2025-01-13 20:24:16.56236889 +0000 UTC m=+0.603279692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Jan 13 20:24:16.579094 kubelet[2002]: E0113 20:24:16.579067 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:16.579321 kubelet[2002]: I0113 20:24:16.579309 2002 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:24:16.579738 kubelet[2002]: I0113 20:24:16.579685 2002 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:24:16.579908 kubelet[2002]: I0113 20:24:16.579896 2002 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:24:16.581836 kubelet[2002]: E0113 20:24:16.581801 2002 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:24:16.582439 kubelet[2002]: W0113 20:24:16.582420 2002 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:24:16.582635 kubelet[2002]: E0113 20:24:16.582601 2002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 20:24:16.582824 kubelet[2002]: W0113 20:24:16.582809 2002 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:24:16.582909 kubelet[2002]: E0113 20:24:16.582900 2002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.4" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 20:24:16.583814 kubelet[2002]: I0113 20:24:16.583791 2002 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:24:16.584013 kubelet[2002]: I0113 20:24:16.583992 2002 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:24:16.586611 kubelet[2002]: I0113 20:24:16.586595 2002 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:24:16.597513 kubelet[2002]: E0113 20:24:16.597486 2002 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.4\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 20:24:16.597859 kubelet[2002]: E0113 20:24:16.597841 2002 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.181a5a44818180e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-01-13 20:24:16.58177149 +0000 UTC m=+0.622682292,LastTimestamp:2025-01-13 20:24:16.58177149 +0000 UTC m=+0.622682292,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Jan 13 20:24:16.598628 kubelet[2002]: W0113 20:24:16.598043 2002 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 20:24:16.598628 kubelet[2002]: E0113 20:24:16.598067 2002 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: 
csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 20:24:16.600217 kubelet[2002]: I0113 20:24:16.600189 2002 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:24:16.600217 kubelet[2002]: I0113 20:24:16.600213 2002 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:24:16.600346 kubelet[2002]: I0113 20:24:16.600229 2002 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:24:16.602761 kubelet[2002]: I0113 20:24:16.602739 2002 policy_none.go:49] "None policy: Start" Jan 13 20:24:16.603235 kubelet[2002]: E0113 20:24:16.603208 2002 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.4.181a5a44827cb87b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.4,UID:10.0.0.4,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.4 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.4,},FirstTimestamp:2025-01-13 20:24:16.598235259 +0000 UTC m=+0.639146021,LastTimestamp:2025-01-13 20:24:16.598235259 +0000 UTC m=+0.639146021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.4,}" Jan 13 20:24:16.604713 kubelet[2002]: I0113 20:24:16.604361 2002 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:24:16.604713 kubelet[2002]: I0113 20:24:16.604404 2002 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:24:16.611067 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:24:16.625899 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:24:16.629636 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:24:16.635481 kubelet[2002]: I0113 20:24:16.635442 2002 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:24:16.637101 kubelet[2002]: I0113 20:24:16.636733 2002 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:24:16.637101 kubelet[2002]: I0113 20:24:16.636755 2002 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:24:16.637101 kubelet[2002]: I0113 20:24:16.636758 2002 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:24:16.637101 kubelet[2002]: I0113 20:24:16.636771 2002 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:24:16.637101 kubelet[2002]: E0113 20:24:16.636873 2002 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:24:16.637297 kubelet[2002]: I0113 20:24:16.637131 2002 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:24:16.642245 kubelet[2002]: E0113 20:24:16.642217 2002 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.4\" not found" Jan 13 20:24:16.681643 kubelet[2002]: I0113 20:24:16.681605 2002 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.4" Jan 13 20:24:16.691455 kubelet[2002]: I0113 20:24:16.691341 2002 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.4" Jan 13 20:24:16.707341 kubelet[2002]: E0113 20:24:16.707311 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:16.810403 kubelet[2002]: E0113 20:24:16.808556 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:16.909529 kubelet[2002]: E0113 20:24:16.909472 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.010210 kubelet[2002]: E0113 20:24:17.010132 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.111355 kubelet[2002]: E0113 20:24:17.111150 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.212198 kubelet[2002]: E0113 20:24:17.212123 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.313168 kubelet[2002]: E0113 20:24:17.313055 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.413875 kubelet[2002]: E0113 20:24:17.413809 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.491424 sudo[1875]: pam_unix(sudo:session): session closed for user root Jan 13 20:24:17.514604 kubelet[2002]: E0113 20:24:17.514548 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.524067 kubelet[2002]: I0113 20:24:17.523771 2002 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:24:17.524067 kubelet[2002]: W0113 20:24:17.524024 2002 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:24:17.559253 kubelet[2002]: E0113 20:24:17.559203 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:17.615221 kubelet[2002]: E0113 20:24:17.615152 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.650984 sshd[1858]: Connection closed by 147.75.109.163 port 43272 Jan 13 20:24:17.652112 sshd-session[1856]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:17.657593 systemd[1]: sshd@6-138.199.153.209:22-147.75.109.163:43272.service: Deactivated successfully. Jan 13 20:24:17.659546 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:24:17.661016 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:24:17.662568 systemd-logind[1456]: Removed session 7. Jan 13 20:24:17.715598 kubelet[2002]: E0113 20:24:17.715353 2002 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.4\" not found" Jan 13 20:24:17.816921 kubelet[2002]: I0113 20:24:17.816885 2002 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 20:24:17.817415 containerd[1466]: time="2025-01-13T20:24:17.817365341Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:24:17.817849 kubelet[2002]: I0113 20:24:17.817630 2002 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:24:18.560171 kubelet[2002]: E0113 20:24:18.560107 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:18.560171 kubelet[2002]: I0113 20:24:18.560119 2002 apiserver.go:52] "Watching apiserver" Jan 13 20:24:18.566308 kubelet[2002]: I0113 20:24:18.566270 2002 topology_manager.go:215] "Topology Admit Handler" podUID="bc81fd54-24db-4a81-bf2d-d5fe49e7995e" podNamespace="calico-system" podName="calico-node-qkncz" Jan 13 20:24:18.566448 kubelet[2002]: I0113 20:24:18.566392 2002 topology_manager.go:215] "Topology Admit Handler" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" podNamespace="calico-system" podName="csi-node-driver-2b7r8" Jan 13 20:24:18.566478 kubelet[2002]: I0113 20:24:18.566469 2002 topology_manager.go:215] "Topology Admit Handler" podUID="4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5" podNamespace="kube-system" podName="kube-proxy-4nwv9" Jan 13 20:24:18.568473 kubelet[2002]: E0113 20:24:18.566838 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:18.573223 systemd[1]: Created slice kubepods-besteffort-pod4c2f98a8_b1ba_45b1_9654_ee0e6c445fb5.slice - libcontainer container kubepods-besteffort-pod4c2f98a8_b1ba_45b1_9654_ee0e6c445fb5.slice. Jan 13 20:24:18.580638 kubelet[2002]: I0113 20:24:18.580533 2002 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:24:18.586431 systemd[1]: Created slice kubepods-besteffort-podbc81fd54_24db_4a81_bf2d_d5fe49e7995e.slice - libcontainer container kubepods-besteffort-podbc81fd54_24db_4a81_bf2d_d5fe49e7995e.slice. 
Jan 13 20:24:18.590829 kubelet[2002]: I0113 20:24:18.590323 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-lib-modules\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.590829 kubelet[2002]: I0113 20:24:18.590404 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-var-run-calico\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.590829 kubelet[2002]: I0113 20:24:18.590461 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/213cfc76-d65d-488d-bb43-a25e993a2250-socket-dir\") pod \"csi-node-driver-2b7r8\" (UID: \"213cfc76-d65d-488d-bb43-a25e993a2250\") " pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:18.590829 kubelet[2002]: I0113 20:24:18.590521 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2jm6\" (UniqueName: \"kubernetes.io/projected/213cfc76-d65d-488d-bb43-a25e993a2250-kube-api-access-k2jm6\") pod \"csi-node-driver-2b7r8\" (UID: \"213cfc76-d65d-488d-bb43-a25e993a2250\") " pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:18.590829 kubelet[2002]: I0113 20:24:18.590576 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5-kube-proxy\") pod \"kube-proxy-4nwv9\" (UID: \"4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5\") " pod="kube-system/kube-proxy-4nwv9" Jan 13 20:24:18.591060 kubelet[2002]: I0113 20:24:18.590620 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp62z\" (UniqueName: \"kubernetes.io/projected/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-kube-api-access-vp62z\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.591060 kubelet[2002]: I0113 20:24:18.590660 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/213cfc76-d65d-488d-bb43-a25e993a2250-registration-dir\") pod \"csi-node-driver-2b7r8\" (UID: \"213cfc76-d65d-488d-bb43-a25e993a2250\") " pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:18.591060 kubelet[2002]: I0113 20:24:18.590698 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-policysync\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.591060 kubelet[2002]: I0113 20:24:18.590748 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-tigera-ca-bundle\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.591951 kubelet[2002]: I0113 
20:24:18.591894 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-node-certs\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.592096 kubelet[2002]: I0113 20:24:18.592030 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/213cfc76-d65d-488d-bb43-a25e993a2250-varrun\") pod \"csi-node-driver-2b7r8\" (UID: \"213cfc76-d65d-488d-bb43-a25e993a2250\") " pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:18.592096 kubelet[2002]: I0113 20:24:18.592086 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/213cfc76-d65d-488d-bb43-a25e993a2250-kubelet-dir\") pod \"csi-node-driver-2b7r8\" (UID: \"213cfc76-d65d-488d-bb43-a25e993a2250\") " pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:18.592151 kubelet[2002]: I0113 20:24:18.592138 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5-lib-modules\") pod \"kube-proxy-4nwv9\" (UID: \"4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5\") " pod="kube-system/kube-proxy-4nwv9" Jan 13 20:24:18.592275 kubelet[2002]: I0113 20:24:18.592183 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5-xtables-lock\") pod \"kube-proxy-4nwv9\" (UID: \"4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5\") " pod="kube-system/kube-proxy-4nwv9" Jan 13 20:24:18.592375 kubelet[2002]: I0113 20:24:18.592306 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwmfk\" (UniqueName: \"kubernetes.io/projected/4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5-kube-api-access-vwmfk\") pod \"kube-proxy-4nwv9\" (UID: \"4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5\") " pod="kube-system/kube-proxy-4nwv9" Jan 13 20:24:18.592375 kubelet[2002]: I0113 20:24:18.592364 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-xtables-lock\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.593324 kubelet[2002]: I0113 20:24:18.592488 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-var-lib-calico\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.593324 kubelet[2002]: I0113 20:24:18.592551 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-cni-bin-dir\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.593324 kubelet[2002]: I0113 20:24:18.592587 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-cni-net-dir\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.593324 kubelet[2002]: I0113 20:24:18.592616 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-cni-log-dir\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.593324 kubelet[2002]: I0113 20:24:18.592644 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bc81fd54-24db-4a81-bf2d-d5fe49e7995e-flexvol-driver-host\") pod \"calico-node-qkncz\" (UID: \"bc81fd54-24db-4a81-bf2d-d5fe49e7995e\") " pod="calico-system/calico-node-qkncz" Jan 13 20:24:18.697187 kubelet[2002]: E0113 20:24:18.697124 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.697187 kubelet[2002]: W0113 20:24:18.697154 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.697187 kubelet[2002]: E0113 20:24:18.697176 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.697405 kubelet[2002]: E0113 20:24:18.697375 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.697405 kubelet[2002]: W0113 20:24:18.697383 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.697405 kubelet[2002]: E0113 20:24:18.697396 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:18.697550 kubelet[2002]: E0113 20:24:18.697533 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.697550 kubelet[2002]: W0113 20:24:18.697545 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.697687 kubelet[2002]: E0113 20:24:18.697668 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.697687 kubelet[2002]: W0113 20:24:18.697687 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.697878 kubelet[2002]: E0113 20:24:18.697848 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.697878 kubelet[2002]: W0113 20:24:18.697860 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.697878 kubelet[2002]: E0113 20:24:18.697872 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.698637 kubelet[2002]: E0113 20:24:18.698143 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.698637 kubelet[2002]: W0113 20:24:18.698418 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.698637 kubelet[2002]: E0113 20:24:18.698437 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.698637 kubelet[2002]: E0113 20:24:18.698605 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.698637 kubelet[2002]: W0113 20:24:18.698614 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.698637 kubelet[2002]: E0113 20:24:18.698627 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:18.698857 kubelet[2002]: E0113 20:24:18.698807 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.698857 kubelet[2002]: W0113 20:24:18.698822 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.698857 kubelet[2002]: E0113 20:24:18.698835 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.698933 kubelet[2002]: E0113 20:24:18.698874 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.699030 kubelet[2002]: E0113 20:24:18.699014 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.699168 kubelet[2002]: E0113 20:24:18.699142 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.699168 kubelet[2002]: W0113 20:24:18.699160 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.699224 kubelet[2002]: E0113 20:24:18.699193 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.699615 kubelet[2002]: E0113 20:24:18.699588 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.699615 kubelet[2002]: W0113 20:24:18.699606 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.699695 kubelet[2002]: E0113 20:24:18.699625 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.699911 kubelet[2002]: E0113 20:24:18.699897 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.699911 kubelet[2002]: W0113 20:24:18.699910 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.699989 kubelet[2002]: E0113 20:24:18.699927 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:18.700438 kubelet[2002]: E0113 20:24:18.700422 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.700438 kubelet[2002]: W0113 20:24:18.700437 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.700566 kubelet[2002]: E0113 20:24:18.700550 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.700825 kubelet[2002]: E0113 20:24:18.700770 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.700825 kubelet[2002]: W0113 20:24:18.700783 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.700825 kubelet[2002]: E0113 20:24:18.700800 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.703901 kubelet[2002]: E0113 20:24:18.703884 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.704688 kubelet[2002]: W0113 20:24:18.704323 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.704688 kubelet[2002]: E0113 20:24:18.704350 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.709966 kubelet[2002]: E0113 20:24:18.709874 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.709966 kubelet[2002]: W0113 20:24:18.709897 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.709966 kubelet[2002]: E0113 20:24:18.709928 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.713523 kubelet[2002]: E0113 20:24:18.713506 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.713616 kubelet[2002]: W0113 20:24:18.713603 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.713693 kubelet[2002]: E0113 20:24:18.713662 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:24:18.715616 kubelet[2002]: E0113 20:24:18.715428 2002 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:24:18.715616 kubelet[2002]: W0113 20:24:18.715450 2002 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:24:18.715616 kubelet[2002]: E0113 20:24:18.715468 2002 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:24:18.884654 containerd[1466]: time="2025-01-13T20:24:18.884465679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4nwv9,Uid:4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5,Namespace:kube-system,Attempt:0,}" Jan 13 20:24:18.890203 containerd[1466]: time="2025-01-13T20:24:18.890146625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qkncz,Uid:bc81fd54-24db-4a81-bf2d-d5fe49e7995e,Namespace:calico-system,Attempt:0,}" Jan 13 20:24:19.476338 containerd[1466]: time="2025-01-13T20:24:19.475678086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:24:19.477554 containerd[1466]: time="2025-01-13T20:24:19.477462131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 13 20:24:19.481198 containerd[1466]: time="2025-01-13T20:24:19.480947819Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:24:19.483022 containerd[1466]: time="2025-01-13T20:24:19.482822706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:24:19.483091 containerd[1466]: time="2025-01-13T20:24:19.483066832Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:24:19.484547 containerd[1466]: time="2025-01-13T20:24:19.484497749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:24:19.488906 containerd[1466]: time="2025-01-13T20:24:19.488396927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 598.13506ms" Jan 13 20:24:19.490714 containerd[1466]: time="2025-01-13T20:24:19.490683345Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.863992ms" Jan 13 
20:24:19.560770 kubelet[2002]: E0113 20:24:19.560724 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:19.609847 containerd[1466]: time="2025-01-13T20:24:19.609279624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:19.609847 containerd[1466]: time="2025-01-13T20:24:19.609402228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:19.609847 containerd[1466]: time="2025-01-13T20:24:19.609419188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:19.610090 containerd[1466]: time="2025-01-13T20:24:19.610020963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:19.610090 containerd[1466]: time="2025-01-13T20:24:19.610070884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:19.610169 containerd[1466]: time="2025-01-13T20:24:19.610091965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:19.610207 containerd[1466]: time="2025-01-13T20:24:19.610180527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:19.610839 containerd[1466]: time="2025-01-13T20:24:19.609505390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:19.637662 kubelet[2002]: E0113 20:24:19.637622 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:19.684638 systemd[1]: Started cri-containerd-22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b.scope - libcontainer container 22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b. Jan 13 20:24:19.689219 systemd[1]: Started cri-containerd-d881bb7098642b977ad1be49ee00a42fc8d33a81f06fdd8ff5552d2fc889cca9.scope - libcontainer container d881bb7098642b977ad1be49ee00a42fc8d33a81f06fdd8ff5552d2fc889cca9. Jan 13 20:24:19.713297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2130610134.mount: Deactivated successfully. 
Jan 13 20:24:19.729848 containerd[1466]: time="2025-01-13T20:24:19.729284580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4nwv9,Uid:4c2f98a8-b1ba-45b1-9654-ee0e6c445fb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d881bb7098642b977ad1be49ee00a42fc8d33a81f06fdd8ff5552d2fc889cca9\"" Jan 13 20:24:19.732048 containerd[1466]: time="2025-01-13T20:24:19.731834004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qkncz,Uid:bc81fd54-24db-4a81-bf2d-d5fe49e7995e,Namespace:calico-system,Attempt:0,} returns sandbox id \"22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b\"" Jan 13 20:24:19.734914 containerd[1466]: time="2025-01-13T20:24:19.734673156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:24:20.561456 kubelet[2002]: E0113 20:24:20.561357 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:21.009742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096832470.mount: Deactivated successfully. Jan 13 20:24:21.097805 containerd[1466]: time="2025-01-13T20:24:21.096862034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:21.098289 containerd[1466]: time="2025-01-13T20:24:21.098231388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Jan 13 20:24:21.098816 containerd[1466]: time="2025-01-13T20:24:21.098789361Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:21.103542 containerd[1466]: time="2025-01-13T20:24:21.103498196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:21.105087 containerd[1466]: time="2025-01-13T20:24:21.104522941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.369788584s" Jan 13 20:24:21.105087 containerd[1466]: time="2025-01-13T20:24:21.104558622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 13 20:24:21.106202 containerd[1466]: time="2025-01-13T20:24:21.105468204Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:24:21.106655 containerd[1466]: time="2025-01-13T20:24:21.106527790Z" level=info msg="CreateContainer within sandbox \"22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:24:21.122962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3218392860.mount: Deactivated successfully. 
Jan 13 20:24:21.127248 containerd[1466]: time="2025-01-13T20:24:21.127208935Z" level=info msg="CreateContainer within sandbox \"22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798\"" Jan 13 20:24:21.128317 containerd[1466]: time="2025-01-13T20:24:21.128289921Z" level=info msg="StartContainer for \"21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798\"" Jan 13 20:24:21.160628 systemd[1]: Started cri-containerd-21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798.scope - libcontainer container 21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798. Jan 13 20:24:21.190728 containerd[1466]: time="2025-01-13T20:24:21.190683924Z" level=info msg="StartContainer for \"21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798\" returns successfully" Jan 13 20:24:21.210542 systemd[1]: cri-containerd-21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798.scope: Deactivated successfully. Jan 13 20:24:21.256303 containerd[1466]: time="2025-01-13T20:24:21.256161763Z" level=info msg="shim disconnected" id=21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798 namespace=k8s.io Jan 13 20:24:21.256570 containerd[1466]: time="2025-01-13T20:24:21.256256685Z" level=warning msg="cleaning up after shim disconnected" id=21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798 namespace=k8s.io Jan 13 20:24:21.256570 containerd[1466]: time="2025-01-13T20:24:21.256338327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:21.562168 kubelet[2002]: E0113 20:24:21.562071 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:21.638272 kubelet[2002]: E0113 20:24:21.638197 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:21.980139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21a1d1ae42c4ed828918e9b0c8480ee205955af6a64b8557c00ef042dde6c798-rootfs.mount: Deactivated successfully. Jan 13 20:24:22.121932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129146683.mount: Deactivated successfully. 
Jan 13 20:24:22.387132 containerd[1466]: time="2025-01-13T20:24:22.386930002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:22.394656 containerd[1466]: time="2025-01-13T20:24:22.394571025Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25274003" Jan 13 20:24:22.395898 containerd[1466]: time="2025-01-13T20:24:22.395295202Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:22.398505 containerd[1466]: time="2025-01-13T20:24:22.398327395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:22.398960 containerd[1466]: time="2025-01-13T20:24:22.398931930Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.293433404s" Jan 13 20:24:22.398960 containerd[1466]: time="2025-01-13T20:24:22.398959450Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:24:22.400307 containerd[1466]: time="2025-01-13T20:24:22.400274322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:24:22.401477 containerd[1466]: time="2025-01-13T20:24:22.401290306Z" level=info msg="CreateContainer within sandbox \"d881bb7098642b977ad1be49ee00a42fc8d33a81f06fdd8ff5552d2fc889cca9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:24:22.425901 containerd[1466]: time="2025-01-13T20:24:22.425829855Z" level=info msg="CreateContainer within sandbox \"d881bb7098642b977ad1be49ee00a42fc8d33a81f06fdd8ff5552d2fc889cca9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc9468cbfe11ebc9a937ff2191dbe0ff836fc1f39fdd65b9d968d29a42804d44\"" Jan 13 20:24:22.429281 containerd[1466]: time="2025-01-13T20:24:22.427231408Z" level=info msg="StartContainer for \"bc9468cbfe11ebc9a937ff2191dbe0ff836fc1f39fdd65b9d968d29a42804d44\"" Jan 13 20:24:22.456786 systemd[1]: Started cri-containerd-bc9468cbfe11ebc9a937ff2191dbe0ff836fc1f39fdd65b9d968d29a42804d44.scope - libcontainer container bc9468cbfe11ebc9a937ff2191dbe0ff836fc1f39fdd65b9d968d29a42804d44. 
Jan 13 20:24:22.488678 containerd[1466]: time="2025-01-13T20:24:22.488553839Z" level=info msg="StartContainer for \"bc9468cbfe11ebc9a937ff2191dbe0ff836fc1f39fdd65b9d968d29a42804d44\" returns successfully" Jan 13 20:24:22.562441 kubelet[2002]: E0113 20:24:22.562385 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:22.674599 kubelet[2002]: I0113 20:24:22.674486 2002 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4nwv9" podStartSLOduration=4.009659953 podStartE2EDuration="6.674407298s" podCreationTimestamp="2025-01-13 20:24:16 +0000 UTC" firstStartedPulling="2025-01-13 20:24:19.734595074 +0000 UTC m=+3.775505876" lastFinishedPulling="2025-01-13 20:24:22.399342419 +0000 UTC m=+6.440253221" observedRunningTime="2025-01-13 20:24:22.674184373 +0000 UTC m=+6.715095175" watchObservedRunningTime="2025-01-13 20:24:22.674407298 +0000 UTC m=+6.715318140" Jan 13 20:24:23.563302 kubelet[2002]: E0113 20:24:23.563201 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:23.637383 kubelet[2002]: E0113 20:24:23.637178 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:24.563882 kubelet[2002]: E0113 20:24:24.563832 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:25.014627 containerd[1466]: time="2025-01-13T20:24:25.014559091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:25.015952 containerd[1466]: time="2025-01-13T20:24:25.015884881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 13 20:24:25.017299 containerd[1466]: time="2025-01-13T20:24:25.016936425Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:25.024305 containerd[1466]: time="2025-01-13T20:24:25.022982243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:25.024481 containerd[1466]: time="2025-01-13T20:24:25.024347794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.623933029s" Jan 13 20:24:25.024481 containerd[1466]: time="2025-01-13T20:24:25.024382515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 13 20:24:25.027680 containerd[1466]: time="2025-01-13T20:24:25.027646389Z" level=info msg="CreateContainer within sandbox \"22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:24:25.044144 containerd[1466]: time="2025-01-13T20:24:25.044039843Z" level=info msg="CreateContainer within sandbox \"22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f\"" Jan 13 20:24:25.045149 containerd[1466]: time="2025-01-13T20:24:25.045115228Z" level=info msg="StartContainer for \"fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f\"" Jan 13 20:24:25.075461 systemd[1]: Started cri-containerd-fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f.scope - libcontainer container fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f. Jan 13 20:24:25.109294 containerd[1466]: time="2025-01-13T20:24:25.109176569Z" level=info msg="StartContainer for \"fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f\" returns successfully" Jan 13 20:24:25.565419 kubelet[2002]: E0113 20:24:25.565370 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:25.583604 containerd[1466]: time="2025-01-13T20:24:25.583536147Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:24:25.587711 systemd[1]: cri-containerd-fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f.scope: Deactivated successfully. Jan 13 20:24:25.616438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f-rootfs.mount: Deactivated successfully. Jan 13 20:24:25.621853 kubelet[2002]: I0113 20:24:25.621803 2002 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:24:25.648556 systemd[1]: Created slice kubepods-besteffort-pod213cfc76_d65d_488d_bb43_a25e993a2250.slice - libcontainer container kubepods-besteffort-pod213cfc76_d65d_488d_bb43_a25e993a2250.slice. 
Jan 13 20:24:25.652839 containerd[1466]: time="2025-01-13T20:24:25.652780846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:0,}" Jan 13 20:24:25.823587 containerd[1466]: time="2025-01-13T20:24:25.823336655Z" level=info msg="shim disconnected" id=fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f namespace=k8s.io Jan 13 20:24:25.823587 containerd[1466]: time="2025-01-13T20:24:25.823493019Z" level=warning msg="cleaning up after shim disconnected" id=fa82ec5303c11cf5031f17fb37f7917b1a0c3f36af3058158ca2b183f42a6b8f namespace=k8s.io Jan 13 20:24:25.823587 containerd[1466]: time="2025-01-13T20:24:25.823501019Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:25.842387 containerd[1466]: time="2025-01-13T20:24:25.842335649Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:24:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:24:25.910377 containerd[1466]: time="2025-01-13T20:24:25.910298438Z" level=error msg="Failed to destroy network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:25.910776 containerd[1466]: time="2025-01-13T20:24:25.910730128Z" level=error msg="encountered an error cleaning up failed sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:25.910869 containerd[1466]: time="2025-01-13T20:24:25.910833571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:25.911518 kubelet[2002]: E0113 20:24:25.911123 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:25.911518 kubelet[2002]: E0113 20:24:25.911188 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:25.911518 kubelet[2002]: E0113 20:24:25.911211 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:25.911984 kubelet[2002]: E0113 20:24:25.911354 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:26.042374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069-shm.mount: Deactivated successfully. Jan 13 20:24:26.566411 kubelet[2002]: E0113 20:24:26.565961 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:26.674374 containerd[1466]: time="2025-01-13T20:24:26.673877326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:24:26.675325 kubelet[2002]: I0113 20:24:26.675286 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069" Jan 13 20:24:26.678708 containerd[1466]: time="2025-01-13T20:24:26.676511585Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:24:26.678708 containerd[1466]: time="2025-01-13T20:24:26.676688589Z" level=info msg="Ensure that sandbox f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069 in task-service has been cleanup successfully" Jan 13 20:24:26.678030 systemd[1]: run-netns-cni\x2d31e65003\x2d98d6\x2dcce9\x2de583\x2d062787d4cf16.mount: Deactivated successfully. 
Jan 13 20:24:26.679909 containerd[1466]: time="2025-01-13T20:24:26.679754937Z" level=info msg="TearDown network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" successfully" Jan 13 20:24:26.679909 containerd[1466]: time="2025-01-13T20:24:26.679781058Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" returns successfully" Jan 13 20:24:26.680848 containerd[1466]: time="2025-01-13T20:24:26.680818401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:1,}" Jan 13 20:24:26.749022 containerd[1466]: time="2025-01-13T20:24:26.748881208Z" level=error msg="Failed to destroy network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:26.750446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0-shm.mount: Deactivated successfully. Jan 13 20:24:26.750991 containerd[1466]: time="2025-01-13T20:24:26.750789851Z" level=error msg="encountered an error cleaning up failed sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:26.750991 containerd[1466]: time="2025-01-13T20:24:26.750870333Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:26.751354 kubelet[2002]: E0113 20:24:26.751324 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:26.751422 kubelet[2002]: E0113 20:24:26.751381 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:26.751422 kubelet[2002]: E0113 20:24:26.751404 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:26.751483 kubelet[2002]: E0113 20:24:26.751464 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:27.566615 kubelet[2002]: E0113 20:24:27.566552 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:27.679996 kubelet[2002]: I0113 20:24:27.679958 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0" Jan 13 20:24:27.680606 containerd[1466]: time="2025-01-13T20:24:27.680572909Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" Jan 13 20:24:27.681357 containerd[1466]: time="2025-01-13T20:24:27.681180123Z" level=info msg="Ensure that sandbox c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0 in task-service has been cleanup successfully" Jan 13 20:24:27.682583 systemd[1]: run-netns-cni\x2d6fbc17d3\x2dd1cf\x2d8841\x2d4e3c\x2d5128f411976d.mount: Deactivated successfully. 
Jan 13 20:24:27.683869 containerd[1466]: time="2025-01-13T20:24:27.683625097Z" level=info msg="TearDown network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" successfully" Jan 13 20:24:27.683869 containerd[1466]: time="2025-01-13T20:24:27.683651057Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" returns successfully" Jan 13 20:24:27.684427 containerd[1466]: time="2025-01-13T20:24:27.683981505Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:24:27.684427 containerd[1466]: time="2025-01-13T20:24:27.684056306Z" level=info msg="TearDown network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" successfully" Jan 13 20:24:27.684427 containerd[1466]: time="2025-01-13T20:24:27.684064786Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" returns successfully" Jan 13 20:24:27.684852 containerd[1466]: time="2025-01-13T20:24:27.684706721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:2,}" Jan 13 20:24:27.747817 containerd[1466]: time="2025-01-13T20:24:27.747647270Z" level=error msg="Failed to destroy network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:27.750181 containerd[1466]: time="2025-01-13T20:24:27.749924600Z" level=error msg="encountered an error cleaning up failed sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:27.750181 containerd[1466]: time="2025-01-13T20:24:27.750048003Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:27.750662 kubelet[2002]: E0113 20:24:27.750545 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:27.750662 kubelet[2002]: E0113 20:24:27.750618 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:27.750662 kubelet[2002]: E0113 20:24:27.750640 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:27.751480 kubelet[2002]: E0113 20:24:27.750836 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:27.751023 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d-shm.mount: Deactivated successfully. Jan 13 20:24:28.379770 kubelet[2002]: I0113 20:24:28.379734 2002 topology_manager.go:215] "Topology Admit Handler" podUID="e8884ff3-6d49-4c01-acad-da43d5467f4e" podNamespace="default" podName="nginx-deployment-6d5f899847-xhwkn" Jan 13 20:24:28.390832 systemd[1]: Created slice kubepods-besteffort-pode8884ff3_6d49_4c01_acad_da43d5467f4e.slice - libcontainer container kubepods-besteffort-pode8884ff3_6d49_4c01_acad_da43d5467f4e.slice. Jan 13 20:24:28.457105 kubelet[2002]: I0113 20:24:28.457053 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rn2\" (UniqueName: \"kubernetes.io/projected/e8884ff3-6d49-4c01-acad-da43d5467f4e-kube-api-access-72rn2\") pod \"nginx-deployment-6d5f899847-xhwkn\" (UID: \"e8884ff3-6d49-4c01-acad-da43d5467f4e\") " pod="default/nginx-deployment-6d5f899847-xhwkn" Jan 13 20:24:28.567252 kubelet[2002]: E0113 20:24:28.567131 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:28.687492 kubelet[2002]: I0113 20:24:28.687410 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d" Jan 13 20:24:28.688603 containerd[1466]: time="2025-01-13T20:24:28.688317039Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\"" Jan 13 20:24:28.688603 containerd[1466]: time="2025-01-13T20:24:28.688498403Z" level=info msg="Ensure that sandbox f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d in task-service has been cleanup successfully" Jan 13 20:24:28.690276 systemd[1]: run-netns-cni\x2dec4f2742\x2dd6ab\x2d3613\x2d95ee\x2d4f09e9979d9d.mount: Deactivated successfully. 
Jan 13 20:24:28.692387 containerd[1466]: time="2025-01-13T20:24:28.691773034Z" level=info msg="TearDown network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" successfully" Jan 13 20:24:28.692387 containerd[1466]: time="2025-01-13T20:24:28.691805515Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" returns successfully" Jan 13 20:24:28.692387 containerd[1466]: time="2025-01-13T20:24:28.692213123Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" Jan 13 20:24:28.692387 containerd[1466]: time="2025-01-13T20:24:28.692341086Z" level=info msg="TearDown network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" successfully" Jan 13 20:24:28.692387 containerd[1466]: time="2025-01-13T20:24:28.692352886Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" returns successfully" Jan 13 20:24:28.693039 containerd[1466]: time="2025-01-13T20:24:28.692639733Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:24:28.693039 containerd[1466]: time="2025-01-13T20:24:28.692729695Z" level=info msg="TearDown network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" successfully" Jan 13 20:24:28.693039 containerd[1466]: time="2025-01-13T20:24:28.692739375Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" returns successfully" Jan 13 20:24:28.693613 containerd[1466]: time="2025-01-13T20:24:28.693531512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:3,}" Jan 13 20:24:28.696447 containerd[1466]: time="2025-01-13T20:24:28.696129969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xhwkn,Uid:e8884ff3-6d49-4c01-acad-da43d5467f4e,Namespace:default,Attempt:0,}" Jan 13 20:24:28.794561 containerd[1466]: time="2025-01-13T20:24:28.794514426Z" level=error msg="Failed to destroy network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:28.796373 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f-shm.mount: Deactivated successfully. 
Jan 13 20:24:28.796880 containerd[1466]: time="2025-01-13T20:24:28.796845877Z" level=error msg="encountered an error cleaning up failed sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:28.797001 containerd[1466]: time="2025-01-13T20:24:28.796982800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:28.798898 kubelet[2002]: E0113 20:24:28.798869 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:28.798988 kubelet[2002]: E0113 20:24:28.798937 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:28.798988 kubelet[2002]: E0113 20:24:28.798959 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:28.799040 kubelet[2002]: E0113 20:24:28.799022 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:28.831816 containerd[1466]: time="2025-01-13T20:24:28.831760116Z" level=error msg="Failed to destroy network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:24:28.834353 containerd[1466]: time="2025-01-13T20:24:28.833693038Z" level=error msg="encountered an error cleaning up failed sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:28.834353 containerd[1466]: time="2025-01-13T20:24:28.833770119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xhwkn,Uid:e8884ff3-6d49-4c01-acad-da43d5467f4e,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:28.834526 kubelet[2002]: E0113 20:24:28.834017 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:28.834526 kubelet[2002]: E0113 20:24:28.834076 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-xhwkn" Jan 13 20:24:28.834526 kubelet[2002]: E0113 20:24:28.834097 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-xhwkn" Jan 13 20:24:28.834605 kubelet[2002]: E0113 20:24:28.834199 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-xhwkn_default(e8884ff3-6d49-4c01-acad-da43d5467f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-xhwkn_default(e8884ff3-6d49-4c01-acad-da43d5467f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-xhwkn" podUID="e8884ff3-6d49-4c01-acad-da43d5467f4e" Jan 13 20:24:28.834855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7-shm.mount: Deactivated successfully. 
Jan 13 20:24:29.568208 kubelet[2002]: E0113 20:24:29.568145 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:29.694725 kubelet[2002]: I0113 20:24:29.693867 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f" Jan 13 20:24:29.695301 containerd[1466]: time="2025-01-13T20:24:29.694969158Z" level=info msg="StopPodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\"" Jan 13 20:24:29.695301 containerd[1466]: time="2025-01-13T20:24:29.695144322Z" level=info msg="Ensure that sandbox d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f in task-service has been cleanup successfully" Jan 13 20:24:29.696949 systemd[1]: run-netns-cni\x2d7511bfa8\x2d8ffa\x2d11f7\x2d991f\x2dfe38288168bd.mount: Deactivated successfully. Jan 13 20:24:29.700704 kubelet[2002]: I0113 20:24:29.700244 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7" Jan 13 20:24:29.701291 containerd[1466]: time="2025-01-13T20:24:29.700888365Z" level=info msg="StopPodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\"" Jan 13 20:24:29.701291 containerd[1466]: time="2025-01-13T20:24:29.701084969Z" level=info msg="Ensure that sandbox 19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7 in task-service has been cleanup successfully" Jan 13 20:24:29.702878 systemd[1]: run-netns-cni\x2d080a005e\x2d6b85\x2dc158\x2d30a9\x2dfa868e4b524f.mount: Deactivated successfully. Jan 13 20:24:29.703718 containerd[1466]: time="2025-01-13T20:24:29.703459820Z" level=info msg="TearDown network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" successfully" Jan 13 20:24:29.703718 containerd[1466]: time="2025-01-13T20:24:29.703490741Z" level=info msg="StopPodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" returns successfully" Jan 13 20:24:29.703995 containerd[1466]: time="2025-01-13T20:24:29.703802147Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\"" Jan 13 20:24:29.703995 containerd[1466]: time="2025-01-13T20:24:29.703915190Z" level=info msg="TearDown network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" successfully" Jan 13 20:24:29.703995 containerd[1466]: time="2025-01-13T20:24:29.703926590Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" returns successfully" Jan 13 20:24:29.705072 containerd[1466]: time="2025-01-13T20:24:29.704220156Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" Jan 13 20:24:29.705072 containerd[1466]: time="2025-01-13T20:24:29.704321438Z" level=info msg="TearDown network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" successfully" Jan 13 20:24:29.705072 containerd[1466]: time="2025-01-13T20:24:29.704335359Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" returns successfully" Jan 13 20:24:29.705072 containerd[1466]: time="2025-01-13T20:24:29.704611285Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:24:29.705072 containerd[1466]: 
time="2025-01-13T20:24:29.704683366Z" level=info msg="TearDown network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" successfully" Jan 13 20:24:29.705072 containerd[1466]: time="2025-01-13T20:24:29.704692806Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" returns successfully" Jan 13 20:24:29.706106 containerd[1466]: time="2025-01-13T20:24:29.705899192Z" level=info msg="TearDown network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" successfully" Jan 13 20:24:29.706106 containerd[1466]: time="2025-01-13T20:24:29.705922793Z" level=info msg="StopPodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" returns successfully" Jan 13 20:24:29.707109 containerd[1466]: time="2025-01-13T20:24:29.706784931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:4,}" Jan 13 20:24:29.707364 containerd[1466]: time="2025-01-13T20:24:29.706972535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xhwkn,Uid:e8884ff3-6d49-4c01-acad-da43d5467f4e,Namespace:default,Attempt:1,}" Jan 13 20:24:29.822243 containerd[1466]: time="2025-01-13T20:24:29.822086878Z" level=error msg="Failed to destroy network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:29.822873 containerd[1466]: time="2025-01-13T20:24:29.822464926Z" level=error msg="encountered an error cleaning up failed sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:29.822873 containerd[1466]: time="2025-01-13T20:24:29.822539247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:29.822961 kubelet[2002]: E0113 20:24:29.822822 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:29.822961 kubelet[2002]: E0113 20:24:29.822880 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:29.822961 kubelet[2002]: E0113 20:24:29.822912 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:29.823039 kubelet[2002]: E0113 20:24:29.822967 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:29.825876 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8-shm.mount: Deactivated successfully. Jan 13 20:24:29.834850 containerd[1466]: time="2025-01-13T20:24:29.834800830Z" level=error msg="Failed to destroy network for sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:29.835368 containerd[1466]: time="2025-01-13T20:24:29.835247439Z" level=error msg="encountered an error cleaning up failed sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:29.835368 containerd[1466]: time="2025-01-13T20:24:29.835345441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xhwkn,Uid:e8884ff3-6d49-4c01-acad-da43d5467f4e,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:29.835876 kubelet[2002]: E0113 20:24:29.835800 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:29.836004 kubelet[2002]: E0113 20:24:29.835957 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-xhwkn" Jan 13 20:24:29.836004 kubelet[2002]: E0113 20:24:29.835993 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-xhwkn" Jan 13 20:24:29.836286 kubelet[2002]: E0113 20:24:29.836084 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-xhwkn_default(e8884ff3-6d49-4c01-acad-da43d5467f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-xhwkn_default(e8884ff3-6d49-4c01-acad-da43d5467f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-xhwkn" podUID="e8884ff3-6d49-4c01-acad-da43d5467f4e" Jan 13 20:24:30.569279 kubelet[2002]: E0113 20:24:30.569192 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:30.698145 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74-shm.mount: Deactivated successfully. 
Jan 13 20:24:30.704348 kubelet[2002]: I0113 20:24:30.703848 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8" Jan 13 20:24:30.704632 containerd[1466]: time="2025-01-13T20:24:30.704589687Z" level=info msg="StopPodSandbox for \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\"" Jan 13 20:24:30.704924 containerd[1466]: time="2025-01-13T20:24:30.704751730Z" level=info msg="Ensure that sandbox 47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8 in task-service has been cleanup successfully" Jan 13 20:24:30.706703 containerd[1466]: time="2025-01-13T20:24:30.706363284Z" level=info msg="TearDown network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\" successfully" Jan 13 20:24:30.706703 containerd[1466]: time="2025-01-13T20:24:30.706400245Z" level=info msg="StopPodSandbox for \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\" returns successfully" Jan 13 20:24:30.707052 containerd[1466]: time="2025-01-13T20:24:30.706841414Z" level=info msg="StopPodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\"" Jan 13 20:24:30.707052 containerd[1466]: time="2025-01-13T20:24:30.706923736Z" level=info msg="TearDown network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" successfully" Jan 13 20:24:30.707052 containerd[1466]: time="2025-01-13T20:24:30.706933336Z" level=info msg="StopPodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" returns successfully" Jan 13 20:24:30.707434 containerd[1466]: time="2025-01-13T20:24:30.707403306Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\"" Jan 13 20:24:30.707490 containerd[1466]: time="2025-01-13T20:24:30.707481148Z" level=info msg="TearDown network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" successfully" Jan 13 20:24:30.707514 containerd[1466]: time="2025-01-13T20:24:30.707490268Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" returns successfully" Jan 13 20:24:30.707667 systemd[1]: run-netns-cni\x2d05c6ced1\x2d92b4\x2d010b\x2d2cb0\x2de8236d6771e0.mount: Deactivated successfully. 
Jan 13 20:24:30.709766 containerd[1466]: time="2025-01-13T20:24:30.707719913Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" Jan 13 20:24:30.709766 containerd[1466]: time="2025-01-13T20:24:30.707779674Z" level=info msg="TearDown network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" successfully" Jan 13 20:24:30.709766 containerd[1466]: time="2025-01-13T20:24:30.707787994Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" returns successfully" Jan 13 20:24:30.709766 containerd[1466]: time="2025-01-13T20:24:30.708420728Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:24:30.709766 containerd[1466]: time="2025-01-13T20:24:30.708500289Z" level=info msg="TearDown network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" successfully" Jan 13 20:24:30.709766 containerd[1466]: time="2025-01-13T20:24:30.708509729Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" returns successfully" Jan 13 20:24:30.709766 containerd[1466]: time="2025-01-13T20:24:30.708997100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:5,}" Jan 13 20:24:30.720047 kubelet[2002]: I0113 20:24:30.720013 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74" Jan 13 20:24:30.724931 containerd[1466]: time="2025-01-13T20:24:30.724889514Z" level=info msg="StopPodSandbox for \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\"" Jan 13 20:24:30.725146 containerd[1466]: time="2025-01-13T20:24:30.725051518Z" level=info msg="Ensure that sandbox 848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74 in task-service has been cleanup successfully" Jan 13 20:24:30.727642 containerd[1466]: time="2025-01-13T20:24:30.727525730Z" level=info msg="TearDown network for sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\" successfully" Jan 13 20:24:30.727642 containerd[1466]: time="2025-01-13T20:24:30.727555371Z" level=info msg="StopPodSandbox for \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\" returns successfully" Jan 13 20:24:30.731032 containerd[1466]: time="2025-01-13T20:24:30.728760436Z" level=info msg="StopPodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\"" Jan 13 20:24:30.731032 containerd[1466]: time="2025-01-13T20:24:30.728844078Z" level=info msg="TearDown network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" successfully" Jan 13 20:24:30.731032 containerd[1466]: time="2025-01-13T20:24:30.728853238Z" level=info msg="StopPodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" returns successfully" Jan 13 20:24:30.730451 systemd[1]: run-netns-cni\x2d2e3f3645\x2d1d0a\x2dd9ea\x2d1a33\x2df4f5eb322949.mount: Deactivated successfully. 
Jan 13 20:24:30.731303 containerd[1466]: time="2025-01-13T20:24:30.731273409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xhwkn,Uid:e8884ff3-6d49-4c01-acad-da43d5467f4e,Namespace:default,Attempt:2,}" Jan 13 20:24:30.801044 containerd[1466]: time="2025-01-13T20:24:30.800967237Z" level=error msg="Failed to destroy network for sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:30.816059 containerd[1466]: time="2025-01-13T20:24:30.815680147Z" level=error msg="encountered an error cleaning up failed sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:30.816059 containerd[1466]: time="2025-01-13T20:24:30.815831950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:30.816303 kubelet[2002]: E0113 20:24:30.816205 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:30.816368 kubelet[2002]: E0113 20:24:30.816305 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:30.816368 kubelet[2002]: E0113 20:24:30.816348 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2b7r8" Jan 13 20:24:30.816461 kubelet[2002]: E0113 20:24:30.816435 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2b7r8_calico-system(213cfc76-d65d-488d-bb43-a25e993a2250)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2b7r8" podUID="213cfc76-d65d-488d-bb43-a25e993a2250" Jan 13 20:24:30.881856 containerd[1466]: time="2025-01-13T20:24:30.881707058Z" level=error msg="Failed to destroy network for sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:30.883244 containerd[1466]: time="2025-01-13T20:24:30.882985285Z" level=error msg="encountered an error cleaning up failed sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:30.883244 containerd[1466]: time="2025-01-13T20:24:30.883053446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xhwkn,Uid:e8884ff3-6d49-4c01-acad-da43d5467f4e,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:30.883518 kubelet[2002]: E0113 20:24:30.883409 2002 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:24:30.883518 kubelet[2002]: E0113 20:24:30.883467 2002 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-xhwkn" Jan 13 20:24:30.883518 kubelet[2002]: E0113 20:24:30.883487 2002 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-xhwkn" Jan 13 20:24:30.883818 kubelet[2002]: E0113 20:24:30.883538 2002 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-xhwkn_default(e8884ff3-6d49-4c01-acad-da43d5467f4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-6d5f899847-xhwkn_default(e8884ff3-6d49-4c01-acad-da43d5467f4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-xhwkn" podUID="e8884ff3-6d49-4c01-acad-da43d5467f4e" Jan 13 20:24:30.992572 containerd[1466]: time="2025-01-13T20:24:30.991893619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:30.993816 containerd[1466]: time="2025-01-13T20:24:30.993780179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 20:24:30.994874 containerd[1466]: time="2025-01-13T20:24:30.994849281Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:31.004458 containerd[1466]: time="2025-01-13T20:24:31.004413402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:31.005134 containerd[1466]: time="2025-01-13T20:24:31.005107256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.331148169s" Jan 13 20:24:31.005289 containerd[1466]: time="2025-01-13T20:24:31.005253659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 20:24:31.012747 containerd[1466]: time="2025-01-13T20:24:31.012616772Z" level=info msg="CreateContainer within sandbox \"22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:24:31.029533 containerd[1466]: time="2025-01-13T20:24:31.029446081Z" level=info msg="CreateContainer within sandbox \"22765feba3bb1e271fd399113c9ea0bb0b61c0bc408db34ee6b748ec8f44700b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a894e13e0c5e81ea7172799908b2e9e59100386253cfbc246c277162a9293765\"" Jan 13 20:24:31.032286 containerd[1466]: time="2025-01-13T20:24:31.030255178Z" level=info msg="StartContainer for \"a894e13e0c5e81ea7172799908b2e9e59100386253cfbc246c277162a9293765\"" Jan 13 20:24:31.060490 systemd[1]: Started cri-containerd-a894e13e0c5e81ea7172799908b2e9e59100386253cfbc246c277162a9293765.scope - libcontainer container a894e13e0c5e81ea7172799908b2e9e59100386253cfbc246c277162a9293765. Jan 13 20:24:31.095814 containerd[1466]: time="2025-01-13T20:24:31.095589414Z" level=info msg="StartContainer for \"a894e13e0c5e81ea7172799908b2e9e59100386253cfbc246c277162a9293765\" returns successfully" Jan 13 20:24:31.193316 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:24:31.193472 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Jan 13 20:24:31.570348 kubelet[2002]: E0113 20:24:31.570171 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:31.702537 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155-shm.mount: Deactivated successfully. Jan 13 20:24:31.702645 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2-shm.mount: Deactivated successfully. Jan 13 20:24:31.702709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340173236.mount: Deactivated successfully. Jan 13 20:24:31.728303 kubelet[2002]: I0113 20:24:31.726948 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2" Jan 13 20:24:31.728468 containerd[1466]: time="2025-01-13T20:24:31.727890894Z" level=info msg="StopPodSandbox for \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\"" Jan 13 20:24:31.728468 containerd[1466]: time="2025-01-13T20:24:31.728147620Z" level=info msg="Ensure that sandbox 166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2 in task-service has been cleanup successfully" Jan 13 20:24:31.730329 containerd[1466]: time="2025-01-13T20:24:31.728777993Z" level=info msg="TearDown network for sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\" successfully" Jan 13 20:24:31.730329 containerd[1466]: time="2025-01-13T20:24:31.728805513Z" level=info msg="StopPodSandbox for \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\" returns successfully" Jan 13 20:24:31.732466 systemd[1]: run-netns-cni\x2d0059c589\x2d3b76\x2dc16c\x2d127c\x2d080d3d01859a.mount: Deactivated successfully. 
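
Editor's note: the repeated RunPodSandbox failures above all report the same condition from the Calico CNI plugin: /var/lib/calico/nodename does not exist yet, because the calico/node container (whose image only finished pulling at 20:24:31) has not written it. Below is a minimal, hypothetical Go sketch of that pre-flight check, mirroring only what the logged error message says; it is not Calico's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// nodenameFile is the path named in the logged error
// "stat /var/lib/calico/nodename: no such file or directory".
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	// Stat the file, as the logged error implies the CNI plugin does on ADD/DELETE.
	if _, err := os.Stat(nodenameFile); err != nil {
		if errors.Is(err, os.ErrNotExist) {
			fmt.Println("nodename file missing:",
				"check that the calico/node container is running and has mounted /var/lib/calico/")
			return
		}
		fmt.Println("unexpected error:", err)
		return
	}
	fmt.Println("nodename file present; CNI ADD/DELETE should stop failing")
}
```

In the log this resolves itself shortly afterwards: once calico-node is running, the Attempt:6 (csi-node-driver-2b7r8) and Attempt:3 (nginx-deployment-6d5f899847-xhwkn) sandboxes are set up successfully.
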
Jan 13 20:24:31.733326 containerd[1466]: time="2025-01-13T20:24:31.733292246Z" level=info msg="StopPodSandbox for \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\"" Jan 13 20:24:31.733459 containerd[1466]: time="2025-01-13T20:24:31.733393209Z" level=info msg="TearDown network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\" successfully" Jan 13 20:24:31.733459 containerd[1466]: time="2025-01-13T20:24:31.733405129Z" level=info msg="StopPodSandbox for \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\" returns successfully" Jan 13 20:24:31.735251 containerd[1466]: time="2025-01-13T20:24:31.735185806Z" level=info msg="StopPodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\"" Jan 13 20:24:31.735436 containerd[1466]: time="2025-01-13T20:24:31.735412490Z" level=info msg="TearDown network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" successfully" Jan 13 20:24:31.735436 containerd[1466]: time="2025-01-13T20:24:31.735429611Z" level=info msg="StopPodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" returns successfully" Jan 13 20:24:31.736447 containerd[1466]: time="2025-01-13T20:24:31.736420791Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\"" Jan 13 20:24:31.736530 containerd[1466]: time="2025-01-13T20:24:31.736498553Z" level=info msg="TearDown network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" successfully" Jan 13 20:24:31.736530 containerd[1466]: time="2025-01-13T20:24:31.736507353Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" returns successfully" Jan 13 20:24:31.737048 containerd[1466]: time="2025-01-13T20:24:31.737022884Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" Jan 13 20:24:31.737112 containerd[1466]: time="2025-01-13T20:24:31.737092085Z" level=info msg="TearDown network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" successfully" Jan 13 20:24:31.737112 containerd[1466]: time="2025-01-13T20:24:31.737103046Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" returns successfully" Jan 13 20:24:31.737420 kubelet[2002]: I0113 20:24:31.737394 2002 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155" Jan 13 20:24:31.738143 containerd[1466]: time="2025-01-13T20:24:31.738008944Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:24:31.738143 containerd[1466]: time="2025-01-13T20:24:31.738081946Z" level=info msg="TearDown network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" successfully" Jan 13 20:24:31.738143 containerd[1466]: time="2025-01-13T20:24:31.738093826Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" returns successfully" Jan 13 20:24:31.738143 containerd[1466]: time="2025-01-13T20:24:31.738123467Z" level=info msg="StopPodSandbox for \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\"" Jan 13 20:24:31.738330 containerd[1466]: time="2025-01-13T20:24:31.738304150Z" level=info msg="Ensure that sandbox 
1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155 in task-service has been cleanup successfully" Jan 13 20:24:31.739811 systemd[1]: run-netns-cni\x2d76428867\x2dff53\x2d0285\x2d4c75\x2d2b08aa7c2bb6.mount: Deactivated successfully. Jan 13 20:24:31.740513 containerd[1466]: time="2025-01-13T20:24:31.740336513Z" level=info msg="TearDown network for sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\" successfully" Jan 13 20:24:31.740513 containerd[1466]: time="2025-01-13T20:24:31.740361393Z" level=info msg="StopPodSandbox for \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\" returns successfully" Jan 13 20:24:31.742934 containerd[1466]: time="2025-01-13T20:24:31.741905105Z" level=info msg="StopPodSandbox for \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\"" Jan 13 20:24:31.742934 containerd[1466]: time="2025-01-13T20:24:31.741966946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:6,}" Jan 13 20:24:31.742934 containerd[1466]: time="2025-01-13T20:24:31.741988147Z" level=info msg="TearDown network for sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\" successfully" Jan 13 20:24:31.742934 containerd[1466]: time="2025-01-13T20:24:31.742681481Z" level=info msg="StopPodSandbox for \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\" returns successfully" Jan 13 20:24:31.743107 containerd[1466]: time="2025-01-13T20:24:31.742962047Z" level=info msg="StopPodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\"" Jan 13 20:24:31.743107 containerd[1466]: time="2025-01-13T20:24:31.743033049Z" level=info msg="TearDown network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" successfully" Jan 13 20:24:31.743107 containerd[1466]: time="2025-01-13T20:24:31.743042609Z" level=info msg="StopPodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" returns successfully" Jan 13 20:24:31.743790 containerd[1466]: time="2025-01-13T20:24:31.743763504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xhwkn,Uid:e8884ff3-6d49-4c01-acad-da43d5467f4e,Namespace:default,Attempt:3,}" Jan 13 20:24:31.762576 kubelet[2002]: I0113 20:24:31.762235 2002 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-qkncz" podStartSLOduration=4.490622639 podStartE2EDuration="15.762180966s" podCreationTimestamp="2025-01-13 20:24:16 +0000 UTC" firstStartedPulling="2025-01-13 20:24:19.733986778 +0000 UTC m=+3.774897580" lastFinishedPulling="2025-01-13 20:24:31.005545105 +0000 UTC m=+15.046455907" observedRunningTime="2025-01-13 20:24:31.761993082 +0000 UTC m=+15.802903884" watchObservedRunningTime="2025-01-13 20:24:31.762180966 +0000 UTC m=+15.803091728" Jan 13 20:24:31.953376 systemd-networkd[1362]: cali5290314e490: Link UP Jan 13 20:24:31.954474 systemd-networkd[1362]: cali5290314e490: Gained carrier Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.826 [INFO][2823] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.844 [INFO][2823] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-csi--node--driver--2b7r8-eth0 csi-node-driver- calico-system 213cfc76-d65d-488d-bb43-a25e993a2250 1425 0 2025-01-13 20:24:16 
+0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.4 csi-node-driver-2b7r8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5290314e490 [] []}} ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Namespace="calico-system" Pod="csi-node-driver-2b7r8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2b7r8-" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.844 [INFO][2823] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Namespace="calico-system" Pod="csi-node-driver-2b7r8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.887 [INFO][2842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" HandleID="k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Workload="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.903 [INFO][2842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" HandleID="k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Workload="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316e70), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.4", "pod":"csi-node-driver-2b7r8", "timestamp":"2025-01-13 20:24:31.887085958 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.903 [INFO][2842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.903 [INFO][2842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.903 [INFO][2842] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.908 [INFO][2842] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.914 [INFO][2842] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.921 [INFO][2842] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.924 [INFO][2842] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.926 [INFO][2842] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.927 [INFO][2842] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.929 [INFO][2842] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.934 [INFO][2842] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.941 [INFO][2842] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.941 [INFO][2842] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" host="10.0.0.4" Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.941 [INFO][2842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:24:31.969111 containerd[1466]: 2025-01-13 20:24:31.942 [INFO][2842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" HandleID="k8s-pod-network.80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Workload="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" Jan 13 20:24:31.969698 containerd[1466]: 2025-01-13 20:24:31.945 [INFO][2823] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Namespace="calico-system" Pod="csi-node-driver-2b7r8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--2b7r8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"213cfc76-d65d-488d-bb43-a25e993a2250", ResourceVersion:"1425", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"csi-node-driver-2b7r8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5290314e490", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:31.969698 containerd[1466]: 2025-01-13 20:24:31.945 [INFO][2823] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.193/32] ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Namespace="calico-system" Pod="csi-node-driver-2b7r8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" Jan 13 20:24:31.969698 containerd[1466]: 2025-01-13 20:24:31.945 [INFO][2823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5290314e490 ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Namespace="calico-system" Pod="csi-node-driver-2b7r8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" Jan 13 20:24:31.969698 containerd[1466]: 2025-01-13 20:24:31.954 [INFO][2823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Namespace="calico-system" Pod="csi-node-driver-2b7r8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" Jan 13 20:24:31.969698 containerd[1466]: 2025-01-13 20:24:31.955 [INFO][2823] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Namespace="calico-system" Pod="csi-node-driver-2b7r8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-csi--node--driver--2b7r8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"213cfc76-d65d-488d-bb43-a25e993a2250", ResourceVersion:"1425", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b", Pod:"csi-node-driver-2b7r8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5290314e490", MAC:"3e:e3:d4:5c:8b:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:31.969698 containerd[1466]: 2025-01-13 20:24:31.967 [INFO][2823] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b" Namespace="calico-system" Pod="csi-node-driver-2b7r8" WorkloadEndpoint="10.0.0.4-k8s-csi--node--driver--2b7r8-eth0" Jan 13 20:24:31.991367 containerd[1466]: time="2025-01-13T20:24:31.991287800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:31.991581 systemd-networkd[1362]: cali1148fbd4b08: Link UP Jan 13 20:24:31.993077 systemd-networkd[1362]: cali1148fbd4b08: Gained carrier Jan 13 20:24:31.993543 containerd[1466]: time="2025-01-13T20:24:31.993238960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:31.995672 containerd[1466]: time="2025-01-13T20:24:31.995606130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:31.997500 containerd[1466]: time="2025-01-13T20:24:31.996515988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.808 [INFO][2808] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.833 [INFO][2808] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0 nginx-deployment-6d5f899847- default e8884ff3-6d49-4c01-acad-da43d5467f4e 1494 0 2025-01-13 20:24:28 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 nginx-deployment-6d5f899847-xhwkn eth0 default [] [] [kns.default ksa.default.default] cali1148fbd4b08 [] []}} ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Namespace="default" Pod="nginx-deployment-6d5f899847-xhwkn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.834 [INFO][2808] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Namespace="default" Pod="nginx-deployment-6d5f899847-xhwkn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.887 [INFO][2838] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" HandleID="k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.909 [INFO][2838] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" HandleID="k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c9b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nginx-deployment-6d5f899847-xhwkn", "timestamp":"2025-01-13 20:24:31.887077438 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.909 [INFO][2838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.942 [INFO][2838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.942 [INFO][2838] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.944 [INFO][2838] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.950 [INFO][2838] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.959 [INFO][2838] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.961 [INFO][2838] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.965 [INFO][2838] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.965 [INFO][2838] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.968 [INFO][2838] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56 Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.975 [INFO][2838] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.985 [INFO][2838] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.985 [INFO][2838] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" host="10.0.0.4" Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.985 [INFO][2838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:24:32.005312 containerd[1466]: 2025-01-13 20:24:31.985 [INFO][2838] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" HandleID="k8s-pod-network.02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Workload="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" Jan 13 20:24:32.006061 containerd[1466]: 2025-01-13 20:24:31.987 [INFO][2808] cni-plugin/k8s.go 386: Populated endpoint ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Namespace="default" Pod="nginx-deployment-6d5f899847-xhwkn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"e8884ff3-6d49-4c01-acad-da43d5467f4e", ResourceVersion:"1494", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nginx-deployment-6d5f899847-xhwkn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1148fbd4b08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:32.006061 containerd[1466]: 2025-01-13 20:24:31.988 [INFO][2808] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.194/32] ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Namespace="default" Pod="nginx-deployment-6d5f899847-xhwkn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" Jan 13 20:24:32.006061 containerd[1466]: 2025-01-13 20:24:31.988 [INFO][2808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1148fbd4b08 ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Namespace="default" Pod="nginx-deployment-6d5f899847-xhwkn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" Jan 13 20:24:32.006061 containerd[1466]: 2025-01-13 20:24:31.995 [INFO][2808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Namespace="default" Pod="nginx-deployment-6d5f899847-xhwkn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" Jan 13 20:24:32.006061 containerd[1466]: 2025-01-13 20:24:31.996 [INFO][2808] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Namespace="default" Pod="nginx-deployment-6d5f899847-xhwkn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"e8884ff3-6d49-4c01-acad-da43d5467f4e", ResourceVersion:"1494", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56", Pod:"nginx-deployment-6d5f899847-xhwkn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali1148fbd4b08", MAC:"3e:fb:7b:f2:a5:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:32.006061 containerd[1466]: 2025-01-13 20:24:32.003 [INFO][2808] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56" Namespace="default" Pod="nginx-deployment-6d5f899847-xhwkn" WorkloadEndpoint="10.0.0.4-k8s-nginx--deployment--6d5f899847--xhwkn-eth0" Jan 13 20:24:32.020647 systemd[1]: Started cri-containerd-80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b.scope - libcontainer container 80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b. Jan 13 20:24:32.036424 containerd[1466]: time="2025-01-13T20:24:32.035364984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:32.036424 containerd[1466]: time="2025-01-13T20:24:32.035576668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:32.036424 containerd[1466]: time="2025-01-13T20:24:32.035592629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:32.036424 containerd[1466]: time="2025-01-13T20:24:32.035685191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:32.055827 containerd[1466]: time="2025-01-13T20:24:32.055787682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2b7r8,Uid:213cfc76-d65d-488d-bb43-a25e993a2250,Namespace:calico-system,Attempt:6,} returns sandbox id \"80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b\"" Jan 13 20:24:32.056529 systemd[1]: Started cri-containerd-02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56.scope - libcontainer container 02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56. 
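
Editor's note: the ipam/ lines above trace Calico's block-affinity allocation for both pods: acquire the host-wide IPAM lock, confirm the node's affinity for the 192.168.99.192/26 block, load the block, and claim the next free address (.193 for the CSI pod, .194 for the nginx pod). The sketch below is a heavily simplified, hypothetical illustration of that per-block assignment idea using an in-memory toy block; it is not Calico's IPAM code, and names like `block` and `assign` are invented for illustration.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// block is a toy stand-in for an IPAM block such as 192.168.99.192/26.
type block struct {
	mu    sync.Mutex     // plays the role of the "host-wide IPAM lock" in the log
	cidr  *net.IPNet     // the affine block for this host
	next  int            // next host offset to try (0 == network address)
	owner map[int]string // offset -> handle, mirroring "Creating new handle"
}

func (b *block) assign(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()

	ones, bits := b.cidr.Mask.Size()
	size := 1 << (bits - ones) // 64 addresses in a /26
	for b.next < size {
		off := b.next
		b.next++
		if _, taken := b.owner[off]; taken {
			continue
		}
		b.owner[off] = handle
		ip := make(net.IP, 4)
		copy(ip, b.cidr.IP.To4())
		ip[3] += byte(off) // safe inside this toy /26 block
		return ip, nil
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.99.192/26")
	b := &block{cidr: cidr, next: 1, owner: map[int]string{}} // skip the network address itself
	for _, h := range []string{"csi-node-driver-2b7r8", "nginx-deployment-6d5f899847-xhwkn"} {
		ip, _ := b.assign(h)
		fmt.Printf("%s -> %s\n", h, ip) // .193 then .194, matching the addresses in the log
	}
}
```
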
Jan 13 20:24:32.060606 containerd[1466]: time="2025-01-13T20:24:32.060145251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:24:32.094508 containerd[1466]: time="2025-01-13T20:24:32.094472513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-xhwkn,Uid:e8884ff3-6d49-4c01-acad-da43d5467f4e,Namespace:default,Attempt:3,} returns sandbox id \"02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56\"" Jan 13 20:24:32.570973 kubelet[2002]: E0113 20:24:32.570918 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:32.772376 kernel: bpftool[3079]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:24:32.788499 systemd[1]: run-containerd-runc-k8s.io-a894e13e0c5e81ea7172799908b2e9e59100386253cfbc246c277162a9293765-runc.IKiixK.mount: Deactivated successfully. Jan 13 20:24:32.972111 systemd-networkd[1362]: vxlan.calico: Link UP Jan 13 20:24:32.974319 systemd-networkd[1362]: vxlan.calico: Gained carrier Jan 13 20:24:33.154991 systemd-networkd[1362]: cali1148fbd4b08: Gained IPv6LL Jan 13 20:24:33.157374 systemd-networkd[1362]: cali5290314e490: Gained IPv6LL Jan 13 20:24:33.439024 containerd[1466]: time="2025-01-13T20:24:33.438934629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:33.441182 containerd[1466]: time="2025-01-13T20:24:33.441088833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 20:24:33.442389 containerd[1466]: time="2025-01-13T20:24:33.442329658Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:33.444491 containerd[1466]: time="2025-01-13T20:24:33.444439620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:33.445824 containerd[1466]: time="2025-01-13T20:24:33.445783847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.385485753s" Jan 13 20:24:33.445889 containerd[1466]: time="2025-01-13T20:24:33.445816968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 20:24:33.446877 containerd[1466]: time="2025-01-13T20:24:33.446831749Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:24:33.447854 containerd[1466]: time="2025-01-13T20:24:33.447795648Z" level=info msg="CreateContainer within sandbox \"80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:24:33.469444 containerd[1466]: time="2025-01-13T20:24:33.469389003Z" level=info msg="CreateContainer within sandbox \"80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"200cf76f60abc0da91e259a7011a8b4968f974a5d57021c372cc94138caf60e0\"" Jan 13 20:24:33.470557 containerd[1466]: time="2025-01-13T20:24:33.470456105Z" level=info msg="StartContainer for \"200cf76f60abc0da91e259a7011a8b4968f974a5d57021c372cc94138caf60e0\"" Jan 13 20:24:33.498455 systemd[1]: Started cri-containerd-200cf76f60abc0da91e259a7011a8b4968f974a5d57021c372cc94138caf60e0.scope - libcontainer container 200cf76f60abc0da91e259a7011a8b4968f974a5d57021c372cc94138caf60e0. Jan 13 20:24:33.532030 containerd[1466]: time="2025-01-13T20:24:33.531875742Z" level=info msg="StartContainer for \"200cf76f60abc0da91e259a7011a8b4968f974a5d57021c372cc94138caf60e0\" returns successfully" Jan 13 20:24:33.571981 kubelet[2002]: E0113 20:24:33.571918 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:34.573050 kubelet[2002]: E0113 20:24:34.572982 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:34.755506 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Jan 13 20:24:35.574041 kubelet[2002]: E0113 20:24:35.573997 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:35.775470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402372164.mount: Deactivated successfully. Jan 13 20:24:36.509060 containerd[1466]: time="2025-01-13T20:24:36.507962268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:36.510583 containerd[1466]: time="2025-01-13T20:24:36.510539438Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045" Jan 13 20:24:36.511644 containerd[1466]: time="2025-01-13T20:24:36.511579938Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:36.519844 containerd[1466]: time="2025-01-13T20:24:36.519775496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:36.521217 containerd[1466]: time="2025-01-13T20:24:36.521172763Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 3.074298214s" Jan 13 20:24:36.521217 containerd[1466]: time="2025-01-13T20:24:36.521213004Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 20:24:36.523816 containerd[1466]: time="2025-01-13T20:24:36.522506429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:24:36.525384 containerd[1466]: time="2025-01-13T20:24:36.525345484Z" level=info msg="CreateContainer within sandbox \"02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:24:36.539759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2685722327.mount: Deactivated successfully. 
Jan 13 20:24:36.545284 containerd[1466]: time="2025-01-13T20:24:36.545211867Z" level=info msg="CreateContainer within sandbox \"02cc23630a3f5733f81792427ae99b223f8e1bd9c78d6905af8052231863ef56\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0cb78a9f12160e52c564bef2a628d05379891e06b88e4e289e03fba6d045dc04\"" Jan 13 20:24:36.547510 containerd[1466]: time="2025-01-13T20:24:36.545875800Z" level=info msg="StartContainer for \"0cb78a9f12160e52c564bef2a628d05379891e06b88e4e289e03fba6d045dc04\"" Jan 13 20:24:36.557629 kubelet[2002]: E0113 20:24:36.557582 2002 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:36.574992 kubelet[2002]: E0113 20:24:36.574946 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:36.587511 systemd[1]: Started cri-containerd-0cb78a9f12160e52c564bef2a628d05379891e06b88e4e289e03fba6d045dc04.scope - libcontainer container 0cb78a9f12160e52c564bef2a628d05379891e06b88e4e289e03fba6d045dc04. Jan 13 20:24:36.618753 containerd[1466]: time="2025-01-13T20:24:36.618708687Z" level=info msg="StartContainer for \"0cb78a9f12160e52c564bef2a628d05379891e06b88e4e289e03fba6d045dc04\" returns successfully" Jan 13 20:24:37.575233 kubelet[2002]: E0113 20:24:37.575147 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:37.891746 containerd[1466]: time="2025-01-13T20:24:37.890495538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:37.891746 containerd[1466]: time="2025-01-13T20:24:37.891563718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 20:24:37.892663 containerd[1466]: time="2025-01-13T20:24:37.892630058Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:37.895152 containerd[1466]: time="2025-01-13T20:24:37.895105025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:37.896129 containerd[1466]: time="2025-01-13T20:24:37.896091484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.373547655s" Jan 13 20:24:37.896129 containerd[1466]: time="2025-01-13T20:24:37.896126525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 20:24:37.898542 containerd[1466]: time="2025-01-13T20:24:37.898340047Z" level=info msg="CreateContainer within sandbox \"80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:24:37.915726 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2882823083.mount: Deactivated successfully. Jan 13 20:24:37.921639 containerd[1466]: time="2025-01-13T20:24:37.921441047Z" level=info msg="CreateContainer within sandbox \"80ee289a92ee02bd30175229536aa5781436030a5bea29e5b69ad8ca65d22e7b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b4f0eb779b8e49eeeb8b50bb23c5114742a218303d31afe5e533cb9cca7a23e3\"" Jan 13 20:24:37.923317 containerd[1466]: time="2025-01-13T20:24:37.922127020Z" level=info msg="StartContainer for \"b4f0eb779b8e49eeeb8b50bb23c5114742a218303d31afe5e533cb9cca7a23e3\"" Jan 13 20:24:37.954447 systemd[1]: Started cri-containerd-b4f0eb779b8e49eeeb8b50bb23c5114742a218303d31afe5e533cb9cca7a23e3.scope - libcontainer container b4f0eb779b8e49eeeb8b50bb23c5114742a218303d31afe5e533cb9cca7a23e3. Jan 13 20:24:37.988213 containerd[1466]: time="2025-01-13T20:24:37.988160198Z" level=info msg="StartContainer for \"b4f0eb779b8e49eeeb8b50bb23c5114742a218303d31afe5e533cb9cca7a23e3\" returns successfully" Jan 13 20:24:38.575986 kubelet[2002]: E0113 20:24:38.575911 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:38.650685 kubelet[2002]: I0113 20:24:38.650656 2002 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:24:38.650685 kubelet[2002]: I0113 20:24:38.650689 2002 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:24:38.808230 kubelet[2002]: I0113 20:24:38.808179 2002 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-2b7r8" podStartSLOduration=16.970573191 podStartE2EDuration="22.808113896s" podCreationTimestamp="2025-01-13 20:24:16 +0000 UTC" firstStartedPulling="2025-01-13 20:24:32.058875345 +0000 UTC m=+16.099786147" lastFinishedPulling="2025-01-13 20:24:37.89641605 +0000 UTC m=+21.937326852" observedRunningTime="2025-01-13 20:24:38.807987053 +0000 UTC m=+22.848897895" watchObservedRunningTime="2025-01-13 20:24:38.808113896 +0000 UTC m=+22.849024738" Jan 13 20:24:38.808513 kubelet[2002]: I0113 20:24:38.808486 2002 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-xhwkn" podStartSLOduration=6.382177422 podStartE2EDuration="10.808442422s" podCreationTimestamp="2025-01-13 20:24:28 +0000 UTC" firstStartedPulling="2025-01-13 20:24:32.096049145 +0000 UTC m=+16.136959947" lastFinishedPulling="2025-01-13 20:24:36.522314105 +0000 UTC m=+20.563224947" observedRunningTime="2025-01-13 20:24:36.794548963 +0000 UTC m=+20.835459765" watchObservedRunningTime="2025-01-13 20:24:38.808442422 +0000 UTC m=+22.849353264" Jan 13 20:24:39.576597 kubelet[2002]: E0113 20:24:39.576542 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:40.577558 kubelet[2002]: E0113 20:24:40.577473 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:41.578123 kubelet[2002]: E0113 20:24:41.577922 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:42.578769 kubelet[2002]: E0113 20:24:42.578704 2002 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:43.016686 kubelet[2002]: I0113 20:24:43.016642 2002 topology_manager.go:215] "Topology Admit Handler" podUID="d56cf216-0876-4c92-bece-743714e0ae77" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 20:24:43.025451 systemd[1]: Created slice kubepods-besteffort-podd56cf216_0876_4c92_bece_743714e0ae77.slice - libcontainer container kubepods-besteffort-podd56cf216_0876_4c92_bece_743714e0ae77.slice. Jan 13 20:24:43.053659 kubelet[2002]: I0113 20:24:43.052497 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d56cf216-0876-4c92-bece-743714e0ae77-data\") pod \"nfs-server-provisioner-0\" (UID: \"d56cf216-0876-4c92-bece-743714e0ae77\") " pod="default/nfs-server-provisioner-0" Jan 13 20:24:43.053659 kubelet[2002]: I0113 20:24:43.052569 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww67x\" (UniqueName: \"kubernetes.io/projected/d56cf216-0876-4c92-bece-743714e0ae77-kube-api-access-ww67x\") pod \"nfs-server-provisioner-0\" (UID: \"d56cf216-0876-4c92-bece-743714e0ae77\") " pod="default/nfs-server-provisioner-0" Jan 13 20:24:43.330104 containerd[1466]: time="2025-01-13T20:24:43.329610546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d56cf216-0876-4c92-bece-743714e0ae77,Namespace:default,Attempt:0,}" Jan 13 20:24:43.487297 systemd-networkd[1362]: cali60e51b789ff: Link UP Jan 13 20:24:43.487736 systemd-networkd[1362]: cali60e51b789ff: Gained carrier Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.387 [INFO][3360] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d56cf216-0876-4c92-bece-743714e0ae77 1594 0 2025-01-13 20:24:43 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.4 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.387 [INFO][3360] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.419 [INFO][3367] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" HandleID="k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.441 [INFO][3367] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" HandleID="k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cb30), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-13 20:24:43.419814658 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.441 [INFO][3367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.441 [INFO][3367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.441 [INFO][3367] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.444 [INFO][3367] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.450 [INFO][3367] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.457 [INFO][3367] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.461 [INFO][3367] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.464 [INFO][3367] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.464 [INFO][3367] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.466 [INFO][3367] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394 Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.472 [INFO][3367] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.480 [INFO][3367] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.480 [INFO][3367] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] 
handle="k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" host="10.0.0.4" Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.480 [INFO][3367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:24:43.509421 containerd[1466]: 2025-01-13 20:24:43.480 [INFO][3367] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" HandleID="k8s-pod-network.1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Workload="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:43.511323 containerd[1466]: 2025-01-13 20:24:43.482 [INFO][3360] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d56cf216-0876-4c92-bece-743714e0ae77", ResourceVersion:"1594", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:43.511323 containerd[1466]: 2025-01-13 20:24:43.483 [INFO][3360] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.195/32] ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:43.511323 containerd[1466]: 2025-01-13 20:24:43.483 [INFO][3360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:43.511323 containerd[1466]: 2025-01-13 20:24:43.487 [INFO][3360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:43.511554 containerd[1466]: 2025-01-13 20:24:43.489 [INFO][3360] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d56cf216-0876-4c92-bece-743714e0ae77", ResourceVersion:"1594", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"26:64:6c:3e:44:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:43.511554 containerd[1466]: 2025-01-13 20:24:43.504 [INFO][3360] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.4-k8s-nfs--server--provisioner--0-eth0" Jan 13 20:24:43.531591 containerd[1466]: time="2025-01-13T20:24:43.531463508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:43.531800 containerd[1466]: time="2025-01-13T20:24:43.531609550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:43.531800 containerd[1466]: time="2025-01-13T20:24:43.531649631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:43.533350 containerd[1466]: time="2025-01-13T20:24:43.532178881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:43.557597 systemd[1]: Started cri-containerd-1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394.scope - libcontainer container 1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394. 
Jan 13 20:24:43.579493 kubelet[2002]: E0113 20:24:43.579445 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:43.596188 containerd[1466]: time="2025-01-13T20:24:43.596021887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d56cf216-0876-4c92-bece-743714e0ae77,Namespace:default,Attempt:0,} returns sandbox id \"1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394\"" Jan 13 20:24:43.598900 containerd[1466]: time="2025-01-13T20:24:43.598837017Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:24:43.908674 update_engine[1457]: I20250113 20:24:43.908588 1457 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 20:24:43.908674 update_engine[1457]: I20250113 20:24:43.908643 1457 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 20:24:43.909357 update_engine[1457]: I20250113 20:24:43.908866 1457 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 20:24:43.909357 update_engine[1457]: I20250113 20:24:43.909296 1457 omaha_request_params.cc:62] Current group set to stable Jan 13 20:24:43.909495 update_engine[1457]: I20250113 20:24:43.909417 1457 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 20:24:43.909495 update_engine[1457]: I20250113 20:24:43.909430 1457 update_attempter.cc:643] Scheduling an action processor start. Jan 13 20:24:43.909495 update_engine[1457]: I20250113 20:24:43.909446 1457 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:24:43.909495 update_engine[1457]: I20250113 20:24:43.909472 1457 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 20:24:43.909678 update_engine[1457]: I20250113 20:24:43.909526 1457 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:24:43.909678 update_engine[1457]: I20250113 20:24:43.909535 1457 omaha_request_action.cc:272] Request: Jan 13 20:24:43.909678 update_engine[1457]: Jan 13 20:24:43.909678 update_engine[1457]: Jan 13 20:24:43.909678 update_engine[1457]: Jan 13 20:24:43.909678 update_engine[1457]: Jan 13 20:24:43.909678 update_engine[1457]: Jan 13 20:24:43.909678 update_engine[1457]: Jan 13 20:24:43.909678 update_engine[1457]: Jan 13 20:24:43.909678 update_engine[1457]: Jan 13 20:24:43.909678 update_engine[1457]: I20250113 20:24:43.909541 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:24:43.912009 update_engine[1457]: I20250113 20:24:43.911679 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:24:43.912241 locksmithd[1486]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 20:24:43.912984 update_engine[1457]: I20250113 20:24:43.912164 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:24:43.912984 update_engine[1457]: E20250113 20:24:43.912933 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:24:43.913502 update_engine[1457]: I20250113 20:24:43.913002 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 20:24:44.581277 kubelet[2002]: E0113 20:24:44.581220 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:45.186556 systemd-networkd[1362]: cali60e51b789ff: Gained IPv6LL Jan 13 20:24:45.374678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689849335.mount: Deactivated successfully. Jan 13 20:24:45.581944 kubelet[2002]: E0113 20:24:45.581331 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:46.582368 kubelet[2002]: E0113 20:24:46.582319 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:46.848490 containerd[1466]: time="2025-01-13T20:24:46.847142768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:46.849886 containerd[1466]: time="2025-01-13T20:24:46.849817974Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373691" Jan 13 20:24:46.851543 containerd[1466]: time="2025-01-13T20:24:46.851465002Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:46.854862 containerd[1466]: time="2025-01-13T20:24:46.854792738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:46.858328 containerd[1466]: time="2025-01-13T20:24:46.856858374Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.257977116s" Jan 13 20:24:46.858328 containerd[1466]: time="2025-01-13T20:24:46.856915655Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 13 20:24:46.861110 containerd[1466]: time="2025-01-13T20:24:46.861063605Z" level=info msg="CreateContainer within sandbox \"1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:24:46.874866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527061273.mount: Deactivated successfully. 
Jan 13 20:24:46.879944 containerd[1466]: time="2025-01-13T20:24:46.879758044Z" level=info msg="CreateContainer within sandbox \"1d79204ceed34933245e37deaa709c3ac0e5a03ec3ff20c332e4d6df59064394\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"bf504c32837189590bc14b0176687f5942b0b35c72aefd5c030d000ec9501fac\"" Jan 13 20:24:46.881674 containerd[1466]: time="2025-01-13T20:24:46.880582418Z" level=info msg="StartContainer for \"bf504c32837189590bc14b0176687f5942b0b35c72aefd5c030d000ec9501fac\"" Jan 13 20:24:46.921388 systemd[1]: run-containerd-runc-k8s.io-bf504c32837189590bc14b0176687f5942b0b35c72aefd5c030d000ec9501fac-runc.Kiw3V9.mount: Deactivated successfully. Jan 13 20:24:46.930616 systemd[1]: Started cri-containerd-bf504c32837189590bc14b0176687f5942b0b35c72aefd5c030d000ec9501fac.scope - libcontainer container bf504c32837189590bc14b0176687f5942b0b35c72aefd5c030d000ec9501fac. Jan 13 20:24:46.961519 containerd[1466]: time="2025-01-13T20:24:46.961468276Z" level=info msg="StartContainer for \"bf504c32837189590bc14b0176687f5942b0b35c72aefd5c030d000ec9501fac\" returns successfully" Jan 13 20:24:47.583740 kubelet[2002]: E0113 20:24:47.583667 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:47.834715 kubelet[2002]: I0113 20:24:47.834525 2002 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.574294961 podStartE2EDuration="4.834452714s" podCreationTimestamp="2025-01-13 20:24:43 +0000 UTC" firstStartedPulling="2025-01-13 20:24:43.598499611 +0000 UTC m=+27.639410373" lastFinishedPulling="2025-01-13 20:24:46.858657324 +0000 UTC m=+30.899568126" observedRunningTime="2025-01-13 20:24:47.833973346 +0000 UTC m=+31.874884188" watchObservedRunningTime="2025-01-13 20:24:47.834452714 +0000 UTC m=+31.875363556" Jan 13 20:24:48.583956 kubelet[2002]: E0113 20:24:48.583882 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:49.584751 kubelet[2002]: E0113 20:24:49.584674 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:50.584900 kubelet[2002]: E0113 20:24:50.584815 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:51.585393 kubelet[2002]: E0113 20:24:51.585304 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:52.586599 kubelet[2002]: E0113 20:24:52.586443 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:53.586954 kubelet[2002]: E0113 20:24:53.586867 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:53.910075 update_engine[1457]: I20250113 20:24:53.909960 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:24:53.910799 update_engine[1457]: I20250113 20:24:53.910431 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:24:53.910858 update_engine[1457]: I20250113 20:24:53.910823 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:24:53.911367 update_engine[1457]: E20250113 20:24:53.911314 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:24:53.911512 update_engine[1457]: I20250113 20:24:53.911393 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 13 20:24:54.587467 kubelet[2002]: E0113 20:24:54.587403 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:55.588575 kubelet[2002]: E0113 20:24:55.588486 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:56.280602 kubelet[2002]: I0113 20:24:56.280552 2002 topology_manager.go:215] "Topology Admit Handler" podUID="77f55e70-a8cb-4595-a588-ec5522f2fc13" podNamespace="default" podName="test-pod-1" Jan 13 20:24:56.287726 systemd[1]: Created slice kubepods-besteffort-pod77f55e70_a8cb_4595_a588_ec5522f2fc13.slice - libcontainer container kubepods-besteffort-pod77f55e70_a8cb_4595_a588_ec5522f2fc13.slice. Jan 13 20:24:56.443448 kubelet[2002]: I0113 20:24:56.443391 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lhbx\" (UniqueName: \"kubernetes.io/projected/77f55e70-a8cb-4595-a588-ec5522f2fc13-kube-api-access-2lhbx\") pod \"test-pod-1\" (UID: \"77f55e70-a8cb-4595-a588-ec5522f2fc13\") " pod="default/test-pod-1" Jan 13 20:24:56.443448 kubelet[2002]: I0113 20:24:56.443453 2002 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ccc0c812-48c5-44fb-a1ac-24494ca12c9b\" (UniqueName: \"kubernetes.io/nfs/77f55e70-a8cb-4595-a588-ec5522f2fc13-pvc-ccc0c812-48c5-44fb-a1ac-24494ca12c9b\") pod \"test-pod-1\" (UID: \"77f55e70-a8cb-4595-a588-ec5522f2fc13\") " pod="default/test-pod-1" Jan 13 20:24:56.559391 kubelet[2002]: E0113 20:24:56.558205 2002 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:56.570344 kernel: FS-Cache: Loaded Jan 13 20:24:56.589605 kubelet[2002]: E0113 20:24:56.589534 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:56.596886 kernel: RPC: Registered named UNIX socket transport module. Jan 13 20:24:56.597042 kernel: RPC: Registered udp transport module. Jan 13 20:24:56.597084 kernel: RPC: Registered tcp transport module. Jan 13 20:24:56.597120 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 20:24:56.597156 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 13 20:24:56.786539 kernel: NFS: Registering the id_resolver key type Jan 13 20:24:56.786638 kernel: Key type id_resolver registered Jan 13 20:24:56.786668 kernel: Key type id_legacy registered Jan 13 20:24:56.809613 nfsidmap[3559]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 20:24:56.814301 nfsidmap[3560]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 20:24:56.892035 containerd[1466]: time="2025-01-13T20:24:56.891972748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:77f55e70-a8cb-4595-a588-ec5522f2fc13,Namespace:default,Attempt:0,}" Jan 13 20:24:57.056305 systemd-networkd[1362]: cali5ec59c6bf6e: Link UP Jan 13 20:24:57.058325 systemd-networkd[1362]: cali5ec59c6bf6e: Gained carrier Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:56.953 [INFO][3565] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.4-k8s-test--pod--1-eth0 default 77f55e70-a8cb-4595-a588-ec5522f2fc13 1651 0 2025-01-13 20:24:45 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.4 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:56.953 [INFO][3565] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:56.985 [INFO][3572] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" HandleID="k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Workload="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.005 [INFO][3572] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" HandleID="k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Workload="10.0.0.4-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003172b0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.4", "pod":"test-pod-1", "timestamp":"2025-01-13 20:24:56.9849913 +0000 UTC"}, Hostname:"10.0.0.4", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.005 [INFO][3572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.005 [INFO][3572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.005 [INFO][3572] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.4' Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.008 [INFO][3572] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.014 [INFO][3572] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.021 [INFO][3572] ipam/ipam.go 489: Trying affinity for 192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.024 [INFO][3572] ipam/ipam.go 155: Attempting to load block cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.027 [INFO][3572] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.027 [INFO][3572] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.030 [INFO][3572] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76 Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.037 [INFO][3572] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.049 [INFO][3572] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.049 [INFO][3572] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.99.196/26] handle="k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" host="10.0.0.4" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.049 [INFO][3572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.049 [INFO][3572] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" HandleID="k8s-pod-network.66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Workload="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.052 [INFO][3565] cni-plugin/k8s.go 386: Populated endpoint ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"77f55e70-a8cb-4595-a588-ec5522f2fc13", ResourceVersion:"1651", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:57.072292 containerd[1466]: 2025-01-13 20:24:57.052 [INFO][3565] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.99.196/32] ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:24:57.074464 containerd[1466]: 2025-01-13 20:24:57.053 [INFO][3565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:24:57.074464 containerd[1466]: 2025-01-13 20:24:57.059 [INFO][3565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:24:57.074464 containerd[1466]: 2025-01-13 20:24:57.060 [INFO][3565] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.4-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"77f55e70-a8cb-4595-a588-ec5522f2fc13", ResourceVersion:"1651", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 24, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.4", ContainerID:"66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"0e:1e:f4:f9:43:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:24:57.074464 containerd[1466]: 2025-01-13 20:24:57.068 [INFO][3565] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.4-k8s-test--pod--1-eth0" Jan 13 20:24:57.098690 containerd[1466]: time="2025-01-13T20:24:57.098582194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:57.098690 containerd[1466]: time="2025-01-13T20:24:57.098645875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:57.098690 containerd[1466]: time="2025-01-13T20:24:57.098658315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:57.100358 containerd[1466]: time="2025-01-13T20:24:57.098764117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:57.126635 systemd[1]: Started cri-containerd-66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76.scope - libcontainer container 66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76. 
Jan 13 20:24:57.162778 containerd[1466]: time="2025-01-13T20:24:57.162668091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:77f55e70-a8cb-4595-a588-ec5522f2fc13,Namespace:default,Attempt:0,} returns sandbox id \"66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76\"" Jan 13 20:24:57.164785 containerd[1466]: time="2025-01-13T20:24:57.164706882Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:24:57.531626 containerd[1466]: time="2025-01-13T20:24:57.531561916Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:57.532566 containerd[1466]: time="2025-01-13T20:24:57.532475249Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 20:24:57.536441 containerd[1466]: time="2025-01-13T20:24:57.536404589Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 371.647986ms" Jan 13 20:24:57.536441 containerd[1466]: time="2025-01-13T20:24:57.536443830Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 20:24:57.538565 containerd[1466]: time="2025-01-13T20:24:57.538496061Z" level=info msg="CreateContainer within sandbox \"66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 20:24:57.565428 containerd[1466]: time="2025-01-13T20:24:57.565378591Z" level=info msg="CreateContainer within sandbox \"66ffa818d2d9779d07eab793beab1aba02a49ec684e8f214e6f15bd2639abe76\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c3d206f7478cd3a2d0c95616a8688a6cdc86e6819d04f124039dde105c043f94\"" Jan 13 20:24:57.567934 containerd[1466]: time="2025-01-13T20:24:57.566454888Z" level=info msg="StartContainer for \"c3d206f7478cd3a2d0c95616a8688a6cdc86e6819d04f124039dde105c043f94\"" Jan 13 20:24:57.590776 kubelet[2002]: E0113 20:24:57.590739 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:57.602499 systemd[1]: Started cri-containerd-c3d206f7478cd3a2d0c95616a8688a6cdc86e6819d04f124039dde105c043f94.scope - libcontainer container c3d206f7478cd3a2d0c95616a8688a6cdc86e6819d04f124039dde105c043f94. 
Jan 13 20:24:57.636820 containerd[1466]: time="2025-01-13T20:24:57.636687758Z" level=info msg="StartContainer for \"c3d206f7478cd3a2d0c95616a8688a6cdc86e6819d04f124039dde105c043f94\" returns successfully" Jan 13 20:24:58.591565 kubelet[2002]: E0113 20:24:58.591496 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:58.628148 kubelet[2002]: I0113 20:24:58.628086 2002 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=13.255544311 podStartE2EDuration="13.62802303s" podCreationTimestamp="2025-01-13 20:24:45 +0000 UTC" firstStartedPulling="2025-01-13 20:24:57.164185594 +0000 UTC m=+41.205096396" lastFinishedPulling="2025-01-13 20:24:57.536664313 +0000 UTC m=+41.577575115" observedRunningTime="2025-01-13 20:24:57.860487171 +0000 UTC m=+41.901397973" watchObservedRunningTime="2025-01-13 20:24:58.62802303 +0000 UTC m=+42.668933832" Jan 13 20:24:58.946577 systemd-networkd[1362]: cali5ec59c6bf6e: Gained IPv6LL Jan 13 20:24:59.592238 kubelet[2002]: E0113 20:24:59.592158 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:00.592840 kubelet[2002]: E0113 20:25:00.592784 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:01.593021 kubelet[2002]: E0113 20:25:01.592936 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:02.593517 kubelet[2002]: E0113 20:25:02.593394 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:03.593768 kubelet[2002]: E0113 20:25:03.593698 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:03.911866 update_engine[1457]: I20250113 20:25:03.911490 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:25:03.911866 update_engine[1457]: I20250113 20:25:03.911787 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:25:03.912612 update_engine[1457]: I20250113 20:25:03.912047 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:25:03.912612 update_engine[1457]: E20250113 20:25:03.912487 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:25:03.912714 update_engine[1457]: I20250113 20:25:03.912678 1457 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 13 20:25:04.594256 kubelet[2002]: E0113 20:25:04.594170 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:05.594851 kubelet[2002]: E0113 20:25:05.594773 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:06.595279 kubelet[2002]: E0113 20:25:06.595220 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:07.596187 kubelet[2002]: E0113 20:25:07.596050 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:08.597509 kubelet[2002]: E0113 20:25:08.596912 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:09.597543 kubelet[2002]: E0113 20:25:09.597459 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:10.597759 kubelet[2002]: E0113 20:25:10.597684 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:11.598954 kubelet[2002]: E0113 20:25:11.598874 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:12.599338 kubelet[2002]: E0113 20:25:12.599244 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:13.599863 kubelet[2002]: E0113 20:25:13.599790 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:13.911394 update_engine[1457]: I20250113 20:25:13.910519 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:25:13.911394 update_engine[1457]: I20250113 20:25:13.910895 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:25:13.911394 update_engine[1457]: I20250113 20:25:13.911239 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:25:13.912306 update_engine[1457]: E20250113 20:25:13.912235 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:25:13.912497 update_engine[1457]: I20250113 20:25:13.912463 1457 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:25:13.912610 update_engine[1457]: I20250113 20:25:13.912580 1457 omaha_request_action.cc:617] Omaha request response: Jan 13 20:25:13.912837 update_engine[1457]: E20250113 20:25:13.912806 1457 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 13 20:25:13.912978 update_engine[1457]: I20250113 20:25:13.912947 1457 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913058 1457 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913082 1457 update_attempter.cc:306] Processing Done. Jan 13 20:25:13.913708 update_engine[1457]: E20250113 20:25:13.913107 1457 update_attempter.cc:619] Update failed. Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913120 1457 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913130 1457 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913141 1457 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913275 1457 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913316 1457 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913328 1457 omaha_request_action.cc:272] Request: Jan 13 20:25:13.913708 update_engine[1457]: Jan 13 20:25:13.913708 update_engine[1457]: Jan 13 20:25:13.913708 update_engine[1457]: Jan 13 20:25:13.913708 update_engine[1457]: Jan 13 20:25:13.913708 update_engine[1457]: Jan 13 20:25:13.913708 update_engine[1457]: Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913338 1457 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:25:13.913708 update_engine[1457]: I20250113 20:25:13.913600 1457 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:25:13.914748 locksmithd[1486]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 13 20:25:13.914748 locksmithd[1486]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 13 20:25:13.915169 update_engine[1457]: I20250113 20:25:13.913911 1457 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:25:13.915169 update_engine[1457]: E20250113 20:25:13.914322 1457 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:25:13.915169 update_engine[1457]: I20250113 20:25:13.914361 1457 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:25:13.915169 update_engine[1457]: I20250113 20:25:13.914367 1457 omaha_request_action.cc:617] Omaha request response: Jan 13 20:25:13.915169 update_engine[1457]: I20250113 20:25:13.914372 1457 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:25:13.915169 update_engine[1457]: I20250113 20:25:13.914378 1457 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:25:13.915169 update_engine[1457]: I20250113 20:25:13.914382 1457 update_attempter.cc:306] Processing Done. Jan 13 20:25:13.915169 update_engine[1457]: I20250113 20:25:13.914388 1457 update_attempter.cc:310] Error event sent. 
Jan 13 20:25:13.915169 update_engine[1457]: I20250113 20:25:13.914395 1457 update_check_scheduler.cc:74] Next update check in 44m16s Jan 13 20:25:14.600778 kubelet[2002]: E0113 20:25:14.600698 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:15.601451 kubelet[2002]: E0113 20:25:15.601368 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:16.557850 kubelet[2002]: E0113 20:25:16.557788 2002 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:16.581008 containerd[1466]: time="2025-01-13T20:25:16.580791720Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:25:16.581008 containerd[1466]: time="2025-01-13T20:25:16.580920202Z" level=info msg="TearDown network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" successfully" Jan 13 20:25:16.581008 containerd[1466]: time="2025-01-13T20:25:16.580933322Z" level=info msg="StopPodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" returns successfully" Jan 13 20:25:16.582110 containerd[1466]: time="2025-01-13T20:25:16.581749173Z" level=info msg="RemovePodSandbox for \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:25:16.582110 containerd[1466]: time="2025-01-13T20:25:16.581783854Z" level=info msg="Forcibly stopping sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\"" Jan 13 20:25:16.582110 containerd[1466]: time="2025-01-13T20:25:16.581851934Z" level=info msg="TearDown network for sandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" successfully" Jan 13 20:25:16.585483 containerd[1466]: time="2025-01-13T20:25:16.585215299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:25:16.585483 containerd[1466]: time="2025-01-13T20:25:16.585279300Z" level=info msg="RemovePodSandbox \"f5fa874e5d91bf3e701ae4b7a07a34d0a109c430d0e149c005b982b56c7be069\" returns successfully" Jan 13 20:25:16.586122 containerd[1466]: time="2025-01-13T20:25:16.586051350Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" Jan 13 20:25:16.586376 containerd[1466]: time="2025-01-13T20:25:16.586190392Z" level=info msg="TearDown network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" successfully" Jan 13 20:25:16.586376 containerd[1466]: time="2025-01-13T20:25:16.586202672Z" level=info msg="StopPodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" returns successfully" Jan 13 20:25:16.586801 containerd[1466]: time="2025-01-13T20:25:16.586683239Z" level=info msg="RemovePodSandbox for \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" Jan 13 20:25:16.586801 containerd[1466]: time="2025-01-13T20:25:16.586715079Z" level=info msg="Forcibly stopping sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\"" Jan 13 20:25:16.586801 containerd[1466]: time="2025-01-13T20:25:16.586780880Z" level=info msg="TearDown network for sandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" successfully" Jan 13 20:25:16.589974 containerd[1466]: time="2025-01-13T20:25:16.589937482Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:25:16.590286 containerd[1466]: time="2025-01-13T20:25:16.589990483Z" level=info msg="RemovePodSandbox \"c4bee92bca9ed3b82d24cfe029003fdeb901f9180917c1f63a44ca45af10dca0\" returns successfully" Jan 13 20:25:16.590739 containerd[1466]: time="2025-01-13T20:25:16.590597611Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\"" Jan 13 20:25:16.591167 containerd[1466]: time="2025-01-13T20:25:16.590904615Z" level=info msg="TearDown network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" successfully" Jan 13 20:25:16.591167 containerd[1466]: time="2025-01-13T20:25:16.590947416Z" level=info msg="StopPodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" returns successfully" Jan 13 20:25:16.591303 containerd[1466]: time="2025-01-13T20:25:16.591230259Z" level=info msg="RemovePodSandbox for \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\"" Jan 13 20:25:16.591303 containerd[1466]: time="2025-01-13T20:25:16.591253300Z" level=info msg="Forcibly stopping sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\"" Jan 13 20:25:16.591372 containerd[1466]: time="2025-01-13T20:25:16.591333381Z" level=info msg="TearDown network for sandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" successfully" Jan 13 20:25:16.594218 containerd[1466]: time="2025-01-13T20:25:16.594155778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:25:16.594218 containerd[1466]: time="2025-01-13T20:25:16.594207659Z" level=info msg="RemovePodSandbox \"f3aea27297bbded47c5cd9495666392e02ce6dab0ba35ce8d80ffe4475cfe75d\" returns successfully" Jan 13 20:25:16.594789 containerd[1466]: time="2025-01-13T20:25:16.594666665Z" level=info msg="StopPodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\"" Jan 13 20:25:16.594789 containerd[1466]: time="2025-01-13T20:25:16.594761586Z" level=info msg="TearDown network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" successfully" Jan 13 20:25:16.594789 containerd[1466]: time="2025-01-13T20:25:16.594774506Z" level=info msg="StopPodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" returns successfully" Jan 13 20:25:16.595400 containerd[1466]: time="2025-01-13T20:25:16.595351754Z" level=info msg="RemovePodSandbox for \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\"" Jan 13 20:25:16.595400 containerd[1466]: time="2025-01-13T20:25:16.595384235Z" level=info msg="Forcibly stopping sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\"" Jan 13 20:25:16.595658 containerd[1466]: time="2025-01-13T20:25:16.595441795Z" level=info msg="TearDown network for sandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" successfully" Jan 13 20:25:16.597985 containerd[1466]: time="2025-01-13T20:25:16.597919508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:25:16.597985 containerd[1466]: time="2025-01-13T20:25:16.597977589Z" level=info msg="RemovePodSandbox \"d84043e89adfda133fc7c29aab188a479a40166f0add3804681b93573565e87f\" returns successfully" Jan 13 20:25:16.598408 containerd[1466]: time="2025-01-13T20:25:16.598384714Z" level=info msg="StopPodSandbox for \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\"" Jan 13 20:25:16.598479 containerd[1466]: time="2025-01-13T20:25:16.598464596Z" level=info msg="TearDown network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\" successfully" Jan 13 20:25:16.598479 containerd[1466]: time="2025-01-13T20:25:16.598474676Z" level=info msg="StopPodSandbox for \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\" returns successfully" Jan 13 20:25:16.599297 containerd[1466]: time="2025-01-13T20:25:16.598763279Z" level=info msg="RemovePodSandbox for \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\"" Jan 13 20:25:16.599297 containerd[1466]: time="2025-01-13T20:25:16.598791560Z" level=info msg="Forcibly stopping sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\"" Jan 13 20:25:16.599297 containerd[1466]: time="2025-01-13T20:25:16.598860041Z" level=info msg="TearDown network for sandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\" successfully" Jan 13 20:25:16.601458 containerd[1466]: time="2025-01-13T20:25:16.601423355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:25:16.601591 kubelet[2002]: E0113 20:25:16.601564 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:25:16.601898 containerd[1466]: time="2025-01-13T20:25:16.601572957Z" level=info msg="RemovePodSandbox \"47a40b8433e9b35bef9e4b4ded4dd8066473555dd4c5bb18d86a62266b9937e8\" returns successfully" Jan 13 20:25:16.602378 containerd[1466]: time="2025-01-13T20:25:16.602355167Z" level=info msg="StopPodSandbox for \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\"" Jan 13 20:25:16.602456 containerd[1466]: time="2025-01-13T20:25:16.602441608Z" level=info msg="TearDown network for sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\" successfully" Jan 13 20:25:16.602486 containerd[1466]: time="2025-01-13T20:25:16.602455369Z" level=info msg="StopPodSandbox for \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\" returns successfully" Jan 13 20:25:16.602813 containerd[1466]: time="2025-01-13T20:25:16.602791933Z" level=info msg="RemovePodSandbox for \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\"" Jan 13 20:25:16.602861 containerd[1466]: time="2025-01-13T20:25:16.602818213Z" level=info msg="Forcibly stopping sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\"" Jan 13 20:25:16.602885 containerd[1466]: time="2025-01-13T20:25:16.602878734Z" level=info msg="TearDown network for sandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\" successfully" Jan 13 20:25:16.605517 containerd[1466]: time="2025-01-13T20:25:16.605470209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:25:16.605589 containerd[1466]: time="2025-01-13T20:25:16.605525449Z" level=info msg="RemovePodSandbox \"166e55d956ea014c2f3945e130d43b7d42b11c47cd5ea44c50aa732b56be48c2\" returns successfully" Jan 13 20:25:16.605960 containerd[1466]: time="2025-01-13T20:25:16.605862494Z" level=info msg="StopPodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\"" Jan 13 20:25:16.606019 containerd[1466]: time="2025-01-13T20:25:16.605994256Z" level=info msg="TearDown network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" successfully" Jan 13 20:25:16.606019 containerd[1466]: time="2025-01-13T20:25:16.606005216Z" level=info msg="StopPodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" returns successfully" Jan 13 20:25:16.607365 containerd[1466]: time="2025-01-13T20:25:16.606287500Z" level=info msg="RemovePodSandbox for \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\"" Jan 13 20:25:16.607365 containerd[1466]: time="2025-01-13T20:25:16.606310060Z" level=info msg="Forcibly stopping sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\"" Jan 13 20:25:16.607365 containerd[1466]: time="2025-01-13T20:25:16.606382461Z" level=info msg="TearDown network for sandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" successfully" Jan 13 20:25:16.609892 containerd[1466]: time="2025-01-13T20:25:16.609770186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:25:16.609892 containerd[1466]: time="2025-01-13T20:25:16.609826987Z" level=info msg="RemovePodSandbox \"19594f852626449173d28d59f48cde779f257deed07d7b45cd73180f6ea6c6f7\" returns successfully" Jan 13 20:25:16.610314 containerd[1466]: time="2025-01-13T20:25:16.610286633Z" level=info msg="StopPodSandbox for \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\"" Jan 13 20:25:16.610422 containerd[1466]: time="2025-01-13T20:25:16.610402434Z" level=info msg="TearDown network for sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\" successfully" Jan 13 20:25:16.610452 containerd[1466]: time="2025-01-13T20:25:16.610421875Z" level=info msg="StopPodSandbox for \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\" returns successfully" Jan 13 20:25:16.610928 containerd[1466]: time="2025-01-13T20:25:16.610898641Z" level=info msg="RemovePodSandbox for \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\"" Jan 13 20:25:16.610989 containerd[1466]: time="2025-01-13T20:25:16.610935281Z" level=info msg="Forcibly stopping sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\"" Jan 13 20:25:16.611033 containerd[1466]: time="2025-01-13T20:25:16.611013682Z" level=info msg="TearDown network for sandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\" successfully" Jan 13 20:25:16.616226 containerd[1466]: time="2025-01-13T20:25:16.616067750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:25:16.616226 containerd[1466]: time="2025-01-13T20:25:16.616128471Z" level=info msg="RemovePodSandbox \"848556a3072abedeca97c7d0d1db7b84fac0d8d6cc104c0c8d05925e609ffa74\" returns successfully"
Jan 13 20:25:16.616785 containerd[1466]: time="2025-01-13T20:25:16.616756439Z" level=info msg="StopPodSandbox for \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\""
Jan 13 20:25:16.616872 containerd[1466]: time="2025-01-13T20:25:16.616855280Z" level=info msg="TearDown network for sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\" successfully"
Jan 13 20:25:16.616903 containerd[1466]: time="2025-01-13T20:25:16.616869680Z" level=info msg="StopPodSandbox for \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\" returns successfully"
Jan 13 20:25:16.617291 containerd[1466]: time="2025-01-13T20:25:16.617227165Z" level=info msg="RemovePodSandbox for \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\""
Jan 13 20:25:16.617291 containerd[1466]: time="2025-01-13T20:25:16.617255606Z" level=info msg="Forcibly stopping sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\""
Jan 13 20:25:16.617390 containerd[1466]: time="2025-01-13T20:25:16.617336647Z" level=info msg="TearDown network for sandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\" successfully"
Jan 13 20:25:16.624571 containerd[1466]: time="2025-01-13T20:25:16.624341380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:25:16.624571 containerd[1466]: time="2025-01-13T20:25:16.624425581Z" level=info msg="RemovePodSandbox \"1060563f145807b067641d6f19b7740b62acd4320f940b30cd238e9960404155\" returns successfully"
Jan 13 20:25:16.844997 kubelet[2002]: E0113 20:25:16.844775 2002 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:36184->10.0.0.2:2379: read: connection timed out"
Jan 13 20:25:17.601792 kubelet[2002]: E0113 20:25:17.601725 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:18.602941 kubelet[2002]: E0113 20:25:18.602877 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:19.603832 kubelet[2002]: E0113 20:25:19.603759 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:20.604569 kubelet[2002]: E0113 20:25:20.604475 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:21.604994 kubelet[2002]: E0113 20:25:21.604916 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:22.605417 kubelet[2002]: E0113 20:25:22.605329 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:23.606128 kubelet[2002]: E0113 20:25:23.606037 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:24.607205 kubelet[2002]: E0113 20:25:24.607147 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:25.608376 kubelet[2002]: E0113 20:25:25.608311 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:26.608841 kubelet[2002]: E0113 20:25:26.608762 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:26.845716 kubelet[2002]: E0113 20:25:26.845611 2002 controller.go:195] "Failed to update lease" err="Put \"https://138.199.153.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:25:27.609345 kubelet[2002]: E0113 20:25:27.609279 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:28.065168 kubelet[2002]: E0113 20:25:28.065033 2002 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:25:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:25:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:25:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-01-13T20:25:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\\\",\\\"ghcr.io/flatcar/calico/node:v3.29.1\\\"],\\\"sizeBytes\\\":137671624},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\\\",\\\"ghcr.io/flatcar/calico/cni:v3.29.1\\\"],\\\"sizeBytes\\\":91072777},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":87371201},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":67696923},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\\\",\\\"registry.k8s.io/kube-proxy:v1.29.12\\\"],\\\"sizeBytes\\\":25272996},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\\\"],\\\"sizeBytes\\\":11252974},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\\\",\\\"ghcr.io/flatcar/calico/csi:v3.29.1\\\"],\\\"sizeBytes\\\":8834384},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\\\"],\\\"sizeBytes\\\":6487425},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":268403}]}}\" for node \"10.0.0.4\": Patch \"https://138.199.153.184:6443/api/v1/nodes/10.0.0.4/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:25:28.609663 kubelet[2002]: E0113 20:25:28.609598 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:29.610397 kubelet[2002]: E0113 20:25:29.610305 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:30.610887 kubelet[2002]: E0113 20:25:30.610804 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:31.611748 kubelet[2002]: E0113 20:25:31.611670 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:32.612152 kubelet[2002]: E0113 20:25:32.612083 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:33.612853 kubelet[2002]: E0113 20:25:33.612768 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:34.613838 kubelet[2002]: E0113 20:25:34.613768 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:35.614873 kubelet[2002]: E0113 20:25:35.614782 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:36.557965 kubelet[2002]: E0113 20:25:36.557900 2002 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:36.615771 kubelet[2002]: E0113 20:25:36.615687 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:36.846768 kubelet[2002]: E0113 20:25:36.846174 2002 controller.go:195] "Failed to update lease" err="Put \"https://138.199.153.184:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:25:37.616666 kubelet[2002]: E0113 20:25:37.616580 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:38.066041 kubelet[2002]: E0113 20:25:38.065965 2002 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"10.0.0.4\": Get \"https://138.199.153.184:6443/api/v1/nodes/10.0.0.4?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 13 20:25:38.617154 kubelet[2002]: E0113 20:25:38.617074 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:25:39.618318 kubelet[2002]: E0113 20:25:39.618213 2002 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"