Jan 13 20:16:32.918515 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:16:32.918536 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:16:32.918545 kernel: KASLR enabled
Jan 13 20:16:32.918551 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:16:32.918557 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 13 20:16:32.918562 kernel: random: crng init done
Jan 13 20:16:32.918569 kernel: secureboot: Secure boot disabled
Jan 13 20:16:32.918575 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:16:32.918581 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:16:32.918588 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:16:32.918594 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918600 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918606 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918612 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918619 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918626 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918633 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918639 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918645 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:32.918652 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:16:32.918658 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:16:32.918665 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:16:32.918672 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 13 20:16:32.918678 kernel: Zone ranges:
Jan 13 20:16:32.918685 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:16:32.918692 kernel: DMA32 empty
Jan 13 20:16:32.918698 kernel: Normal empty
Jan 13 20:16:32.918704 kernel: Movable zone start for each node
Jan 13 20:16:32.918711 kernel: Early memory node ranges
Jan 13 20:16:32.918717 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:16:32.918723 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:16:32.918729 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:16:32.918735 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:16:32.918742 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:16:32.918748 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:16:32.918754 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:16:32.918760 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:16:32.918767 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:16:32.918774 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:16:32.918780 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:16:32.918789 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:16:32.918796 kernel: psci: Trusted OS migration not required
Jan 13 20:16:32.918803 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:16:32.918811 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:16:32.918818 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:16:32.918825 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:16:32.918832 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 20:16:32.918838 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:16:32.918845 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:16:32.918852 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:16:32.918858 kernel: CPU features: detected: Spectre-v4
Jan 13 20:16:32.918865 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:16:32.918871 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:16:32.918879 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:16:32.918886 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:16:32.918893 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:16:32.918899 kernel: alternatives: applying boot alternatives
Jan 13 20:16:32.918907 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:16:32.918914 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:16:32.918921 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:16:32.918928 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:16:32.918934 kernel: Fallback order for Node 0: 0
Jan 13 20:16:32.918941 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 20:16:32.918948 kernel: Policy zone: DMA
Jan 13 20:16:32.918956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:16:32.918963 kernel: software IO TLB: area num 4.
Jan 13 20:16:32.918969 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:16:32.918977 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Jan 13 20:16:32.918983 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:16:32.919005 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:16:32.919012 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:16:32.919020 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:16:32.919027 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:16:32.919034 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:16:32.919040 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:16:32.919047 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:16:32.919055 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:16:32.919062 kernel: GICv3: 256 SPIs implemented
Jan 13 20:16:32.919068 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:16:32.919075 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:16:32.919103 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:16:32.919110 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:16:32.919117 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:16:32.919124 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:16:32.919131 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:16:32.919138 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:16:32.919145 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:16:32.919153 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:16:32.919160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:32.919167 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:16:32.919174 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:16:32.919181 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:16:32.919187 kernel: arm-pv: using stolen time PV
Jan 13 20:16:32.919195 kernel: Console: colour dummy device 80x25
Jan 13 20:16:32.919202 kernel: ACPI: Core revision 20230628
Jan 13 20:16:32.919209 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:16:32.919216 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:16:32.919224 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:16:32.919231 kernel: landlock: Up and running.
Jan 13 20:16:32.919238 kernel: SELinux: Initializing.
Jan 13 20:16:32.919245 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:32.919252 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:32.919258 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:16:32.919265 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:16:32.919272 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:16:32.919279 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:16:32.919287 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:16:32.919294 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:16:32.919300 kernel: Remapping and enabling EFI services.
Jan 13 20:16:32.919307 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:16:32.919318 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:16:32.919327 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:16:32.919334 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:16:32.919341 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:32.919348 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:16:32.919354 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:16:32.919363 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:16:32.919371 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:16:32.919383 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:32.919391 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:16:32.919398 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:16:32.919405 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:16:32.919413 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:16:32.919420 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:32.919427 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:16:32.919435 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:16:32.919443 kernel: SMP: Total of 4 processors activated.
Jan 13 20:16:32.919450 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:16:32.919457 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:16:32.919465 kernel: CPU features: detected: Common not Private translations
Jan 13 20:16:32.919472 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:16:32.919479 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:16:32.919486 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:16:32.919495 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:16:32.919502 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:16:32.919509 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:16:32.919516 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:16:32.919523 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:16:32.919531 kernel: alternatives: applying system-wide alternatives
Jan 13 20:16:32.919538 kernel: devtmpfs: initialized
Jan 13 20:16:32.919546 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:16:32.919553 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:16:32.919562 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:16:32.919569 kernel: SMBIOS 3.0.0 present.
Jan 13 20:16:32.919576 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:16:32.919583 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:16:32.919590 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:16:32.919597 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:16:32.919605 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:16:32.919612 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:16:32.919619 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 13 20:16:32.919627 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:16:32.919635 kernel: cpuidle: using governor menu
Jan 13 20:16:32.919642 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:16:32.919649 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:16:32.919656 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:16:32.919663 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:16:32.919670 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:16:32.919678 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:16:32.919685 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:16:32.919693 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:16:32.919700 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:16:32.919708 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:16:32.919715 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:16:32.919722 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:16:32.919733 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:16:32.919740 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:16:32.919748 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:16:32.919755 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:16:32.919764 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:16:32.919771 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:16:32.919779 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:16:32.919786 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:16:32.919793 kernel: ACPI: Interpreter enabled
Jan 13 20:16:32.919801 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:16:32.919808 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:16:32.919815 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:16:32.919822 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:16:32.919829 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:16:32.919964 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:16:32.920039 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:16:32.920166 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:16:32.920233 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:16:32.920295 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:16:32.920305 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:16:32.920313 kernel: PCI host bridge to bus 0000:00
Jan 13 20:16:32.920396 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:32.920456 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:16:32.920515 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:32.920572 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:16:32.920654 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:16:32.920726 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:16:32.920795 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 20:16:32.920861 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:16:32.920926 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:32.920990 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:32.921054 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:16:32.921141 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 20:16:32.921203 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:32.921265 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:16:32.921331 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:32.921341 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:16:32.921349 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:16:32.921357 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:16:32.921364 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:16:32.921372 kernel: iommu: Default domain type: Translated
Jan 13 20:16:32.921379 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:16:32.921388 kernel: efivars: Registered efivars operations
Jan 13 20:16:32.921395 kernel: vgaarb: loaded
Jan 13 20:16:32.921403 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:16:32.921410 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:16:32.921417 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:16:32.921424 kernel: pnp: PnP ACPI init
Jan 13 20:16:32.921495 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:16:32.921506 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:16:32.921513 kernel: NET: Registered PF_INET protocol family
Jan 13 20:16:32.921523 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:16:32.921530 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:16:32.921538 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:16:32.921545 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:16:32.921552 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:16:32.921560 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:16:32.921567 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:32.921574 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:32.921583 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:16:32.921590 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:16:32.921598 kernel: kvm [1]: HYP mode not available
Jan 13 20:16:32.921605 kernel: Initialise system trusted keyrings
Jan 13 20:16:32.921612 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:16:32.921620 kernel: Key type asymmetric registered
Jan 13 20:16:32.921627 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:16:32.921634 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:16:32.921641 kernel: io scheduler mq-deadline registered
Jan 13 20:16:32.921649 kernel: io scheduler kyber registered
Jan 13 20:16:32.921657 kernel: io scheduler bfq registered
Jan 13 20:16:32.921664 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:16:32.921672 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:16:32.921679 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:16:32.921744 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:16:32.921754 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:16:32.921761 kernel: thunder_xcv, ver 1.0
Jan 13 20:16:32.921769 kernel: thunder_bgx, ver 1.0
Jan 13 20:16:32.921776 kernel: nicpf, ver 1.0
Jan 13 20:16:32.921784 kernel: nicvf, ver 1.0
Jan 13 20:16:32.921856 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:16:32.921919 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:16:32 UTC (1736799392)
Jan 13 20:16:32.921928 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:16:32.921936 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:16:32.921943 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:16:32.921950 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:16:32.921957 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:16:32.921966 kernel: Segment Routing with IPv6
Jan 13 20:16:32.921973 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:16:32.921981 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:16:32.921988 kernel: Key type dns_resolver registered
Jan 13 20:16:32.921995 kernel: registered taskstats version 1
Jan 13 20:16:32.922003 kernel: Loading compiled-in X.509 certificates
Jan 13 20:16:32.922010 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:16:32.922017 kernel: Key type .fscrypt registered
Jan 13 20:16:32.922024 kernel: Key type fscrypt-provisioning registered
Jan 13 20:16:32.922033 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:16:32.922040 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:16:32.922048 kernel: ima: No architecture policies found
Jan 13 20:16:32.922055 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:16:32.922063 kernel: clk: Disabling unused clocks
Jan 13 20:16:32.922070 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:16:32.922111 kernel: Run /init as init process
Jan 13 20:16:32.922119 kernel: with arguments:
Jan 13 20:16:32.922126 kernel: /init
Jan 13 20:16:32.922136 kernel: with environment:
Jan 13 20:16:32.922143 kernel: HOME=/
Jan 13 20:16:32.922150 kernel: TERM=linux
Jan 13 20:16:32.922157 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:16:32.922166 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:32.922175 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:32.922183 systemd[1]: Detected architecture arm64.
Jan 13 20:16:32.922192 systemd[1]: Running in initrd.
Jan 13 20:16:32.922200 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:16:32.922208 systemd[1]: Hostname set to <localhost>.
Jan 13 20:16:32.922216 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:32.922224 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:16:32.922232 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:32.922240 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:32.922263 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:16:32.922272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:32.922281 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:16:32.922289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:16:32.922299 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:16:32.922307 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:16:32.922321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:32.922331 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:32.922341 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:16:32.922349 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:32.922357 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:32.922365 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:16:32.922373 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:32.922380 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:32.922388 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:16:32.922396 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:16:32.922404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:32.922413 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:32.922421 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:32.922429 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:16:32.922439 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:16:32.922447 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:16:32.922455 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:16:32.922463 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:16:32.922470 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:16:32.922479 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:16:32.922487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:32.922495 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:32.922503 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:32.922510 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:16:32.922519 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:16:32.922545 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 20:16:32.922563 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:32.922572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:32.922581 systemd-journald[238]: Journal started
Jan 13 20:16:32.922600 systemd-journald[238]: Runtime Journal (/run/log/journal/0dc9d120eef546a486e998abd57bd5a6) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:16:32.915951 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 20:16:32.925362 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:16:32.926702 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:16:32.931252 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:16:32.931271 kernel: Bridge firewalling registered
Jan 13 20:16:32.931546 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 20:16:32.932293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:16:32.934156 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:16:32.936096 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:32.939956 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:16:32.944160 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:32.945740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:32.949310 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:32.951362 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:16:32.953755 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:32.956270 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:32.965041 dracut-cmdline[273]: dracut-dracut-053
Jan 13 20:16:32.969284 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:16:32.986449 systemd-resolved[276]: Positive Trust Anchors:
Jan 13 20:16:32.986527 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:16:32.986558 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:16:32.991459 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jan 13 20:16:32.995248 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:32.996406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:33.032108 kernel: SCSI subsystem initialized
Jan 13 20:16:33.037097 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:16:33.045103 kernel: iscsi: registered transport (tcp)
Jan 13 20:16:33.059397 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:16:33.059446 kernel: QLogic iSCSI HBA Driver
Jan 13 20:16:33.099127 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:33.118225 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:16:33.137059 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:16:33.137214 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:16:33.137251 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:16:33.182112 kernel: raid6: neonx8 gen() 15708 MB/s
Jan 13 20:16:33.199101 kernel: raid6: neonx4 gen() 15588 MB/s
Jan 13 20:16:33.216097 kernel: raid6: neonx2 gen() 13110 MB/s
Jan 13 20:16:33.233098 kernel: raid6: neonx1 gen() 10434 MB/s
Jan 13 20:16:33.250099 kernel: raid6: int64x8 gen() 6921 MB/s
Jan 13 20:16:33.267098 kernel: raid6: int64x4 gen() 7299 MB/s
Jan 13 20:16:33.284098 kernel: raid6: int64x2 gen() 6079 MB/s
Jan 13 20:16:33.301285 kernel: raid6: int64x1 gen() 5014 MB/s
Jan 13 20:16:33.301301 kernel: raid6: using algorithm neonx8 gen() 15708 MB/s
Jan 13 20:16:33.319171 kernel: raid6: .... xor() 11857 MB/s, rmw enabled
Jan 13 20:16:33.319183 kernel: raid6: using neon recovery algorithm
Jan 13 20:16:33.324102 kernel: xor: measuring software checksum speed
Jan 13 20:16:33.325277 kernel: 8regs : 17177 MB/sec
Jan 13 20:16:33.325289 kernel: 32regs : 19660 MB/sec
Jan 13 20:16:33.326563 kernel: arm64_neon : 24295 MB/sec
Jan 13 20:16:33.326576 kernel: xor: using function: arm64_neon (24295 MB/sec)
Jan 13 20:16:33.378100 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:16:33.388463 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:33.401233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:33.414747 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jan 13 20:16:33.417869 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:33.432434 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:16:33.443986 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jan 13 20:16:33.474780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:33.483267 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:16:33.522990 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:33.539271 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:16:33.551900 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:33.553599 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:33.555711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:33.556871 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:33.564216 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:16:33.572561 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:16:33.581046 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:16:33.581164 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:16:33.581176 kernel: GPT:9289727 != 19775487
Jan 13 20:16:33.581185 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:16:33.581195 kernel: GPT:9289727 != 19775487
Jan 13 20:16:33.581203 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:16:33.581218 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:16:33.576027 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:33.582044 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:33.582162 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:33.584187 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:33.585293 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:33.585438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:33.587636 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:33.599111 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (521)
Jan 13 20:16:33.601125 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (511)
Jan 13 20:16:33.599487 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:33.610816 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:16:33.615404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:33.622887 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:16:33.627469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:16:33.631325 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:16:33.632574 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:16:33.653222 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:16:33.658114 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:33.660555 disk-uuid[553]: Primary Header is updated.
Jan 13 20:16:33.660555 disk-uuid[553]: Secondary Entries is updated.
Jan 13 20:16:33.660555 disk-uuid[553]: Secondary Header is updated.
Jan 13 20:16:33.663616 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:16:33.681991 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:34.672108 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:16:34.672511 disk-uuid[554]: The operation has completed successfully.
Jan 13 20:16:34.692996 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:16:34.693122 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:16:34.721259 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:16:34.724547 sh[573]: Success
Jan 13 20:16:34.744104 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:16:34.786498 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:16:34.788426 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:16:34.789328 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:16:34.800234 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:16:34.800269 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:34.801472 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:16:34.801488 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:16:34.803095 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:16:34.806046 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:16:34.807397 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:16:34.808111 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:16:34.811004 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:16:34.820586 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:34.820635 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:34.821389 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:16:34.823109 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:16:34.830218 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:16:34.832092 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:34.837661 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:16:34.844218 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:16:34.899398 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:34.912246 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:16:34.945732 systemd-networkd[764]: lo: Link UP
Jan 13 20:16:34.945743 systemd-networkd[764]: lo: Gained carrier
Jan 13 20:16:34.946500 systemd-networkd[764]: Enumeration completed
Jan 13 20:16:34.946918 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:34.946920 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:34.947700 systemd-networkd[764]: eth0: Link UP
Jan 13 20:16:34.947704 systemd-networkd[764]: eth0: Gained carrier
Jan 13 20:16:34.947711 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:34.948272 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:16:34.949532 systemd[1]: Reached target network.target - Network.
Jan 13 20:16:34.958923 ignition[674]: Ignition 2.20.0
Jan 13 20:16:34.958929 ignition[674]: Stage: fetch-offline
Jan 13 20:16:34.958961 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:34.958970 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:34.959143 ignition[674]: parsed url from cmdline: ""
Jan 13 20:16:34.959146 ignition[674]: no config URL provided
Jan 13 20:16:34.959151 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:16:34.959159 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:16:34.959187 ignition[674]: op(1): [started] loading QEMU firmware config module
Jan 13 20:16:34.959193 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:16:34.967152 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:16:34.966071 ignition[674]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:16:35.006288 ignition[674]: parsing config with SHA512: ad1c48891fec2abb0752808c56880a87ca0ae24d52d621f730348b05b56993e92eeeea116643cd1e7b429cf24482340637488a236add176ba2d48b83b1fb5a8b
Jan 13 20:16:35.012561 unknown[674]: fetched base config from "system"
Jan 13 20:16:35.012577 unknown[674]: fetched user config from "qemu"
Jan 13 20:16:35.014041 ignition[674]: fetch-offline: fetch-offline passed
Jan 13 20:16:35.015755 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:35.014149 ignition[674]: Ignition finished successfully
Jan 13 20:16:35.017286 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:16:35.023260 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:16:35.034054 ignition[771]: Ignition 2.20.0
Jan 13 20:16:35.034063 ignition[771]: Stage: kargs
Jan 13 20:16:35.034246 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:35.034256 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:35.037224 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:35.035270 ignition[771]: kargs: kargs passed
Jan 13 20:16:35.040279 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:16:35.035323 ignition[771]: Ignition finished successfully
Jan 13 20:16:35.053162 ignition[780]: Ignition 2.20.0
Jan 13 20:16:35.053173 ignition[780]: Stage: disks
Jan 13 20:16:35.053329 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:35.053338 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:35.054206 ignition[780]: disks: disks passed
Jan 13 20:16:35.056148 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:16:35.054257 ignition[780]: Ignition finished successfully
Jan 13 20:16:35.057530 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:35.058985 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:16:35.060958 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:16:35.062531 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:16:35.064379 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:16:35.072221 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:16:35.083379 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:16:35.088051 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:16:35.098201 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:16:35.140097 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:16:35.140539 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:16:35.141784 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:16:35.149181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:35.150907 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:16:35.152123 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:16:35.152163 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:16:35.159839 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798)
Jan 13 20:16:35.152229 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:35.164558 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:35.164577 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:35.164587 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:16:35.156504 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:16:35.158539 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:16:35.168417 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:16:35.169228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:35.203425 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:16:35.207035 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:16:35.210234 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:16:35.213746 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:16:35.284348 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:35.296228 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:16:35.298816 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:35.304093 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:35.317068 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:35.319146 ignition[913]: INFO : Ignition 2.20.0
Jan 13 20:16:35.319146 ignition[913]: INFO : Stage: mount
Jan 13 20:16:35.320603 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:35.320603 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:35.320603 ignition[913]: INFO : mount: mount passed
Jan 13 20:16:35.320603 ignition[913]: INFO : Ignition finished successfully
Jan 13 20:16:35.322703 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:16:35.334222 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:16:35.798902 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:16:35.808235 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:16:35.815093 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Jan 13 20:16:35.817350 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:35.817365 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:35.817375 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:16:35.820096 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:16:35.821388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:16:35.849031 ignition[944]: INFO : Ignition 2.20.0
Jan 13 20:16:35.849031 ignition[944]: INFO : Stage: files
Jan 13 20:16:35.850690 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:35.850690 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:35.850690 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:16:35.854360 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:16:35.854360 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:16:35.854360 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:16:35.854360 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:16:35.854360 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:16:35.853731 unknown[944]: wrote ssh authorized keys file for user: core
Jan 13 20:16:35.862384 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:35.862384 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:16:35.909273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:16:36.225880 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:16:36.225880 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:16:36.229964 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 13 20:16:36.245527 systemd-networkd[764]: eth0: Gained IPv6LL
Jan 13 20:16:36.592536 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:16:36.843361 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 20:16:36.843361 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:16:36.847386 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:36.847386 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:16:36.847386 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:16:36.847386 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 20:16:36.847386 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:16:36.847386 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:16:36.847386 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:16:36.847386 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:16:36.868082 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:16:36.871959 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:16:36.874574 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:16:36.874574 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:36.874574 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:16:36.874574 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:36.874574 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:16:36.874574 ignition[944]: INFO : files: files passed
Jan 13 20:16:36.874574 ignition[944]: INFO : Ignition finished successfully
Jan 13 20:16:36.876540 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:16:36.894287 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:16:36.897278 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:16:36.900103 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:16:36.900195 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:16:36.905853 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:16:36.908117 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:36.908117 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:36.911513 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:16:36.912118 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:36.914613 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:16:36.927254 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:16:36.946397 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:16:36.946503 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:16:36.948794 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:16:36.950775 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:16:36.952674 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:16:36.953422 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:16:36.969125 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:36.984275 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:16:36.993324 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:36.995511 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:36.996734 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:16:36.998508 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:16:36.998626 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:16:37.001143 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:16:37.003152 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:16:37.004813 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:16:37.006529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:16:37.008443 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:16:37.010376 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:16:37.012145 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:37.014170 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:16:37.016135 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:16:37.017873 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:16:37.019389 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:16:37.019512 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:37.021818 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:37.023755 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:37.025666 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:16:37.029140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:37.030380 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:16:37.030493 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:37.033256 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:16:37.033384 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:37.035439 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:16:37.036963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:16:37.038184 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:37.040044 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:16:37.041797 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:16:37.043971 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:16:37.044064 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:37.045593 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:16:37.045677 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:37.047237 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:16:37.047358 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:16:37.049041 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:16:37.049162 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:16:37.065314 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:16:37.067051 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:16:37.067223 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:37.072057 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:16:37.074400 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:16:37.077018 ignition[1000]: INFO : Ignition 2.20.0
Jan 13 20:16:37.077018 ignition[1000]: INFO : Stage: umount
Jan 13 20:16:37.077018 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:37.077018 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:16:37.077018 ignition[1000]: INFO : umount: umount passed
Jan 13 20:16:37.077018 ignition[1000]: INFO : Ignition finished successfully
Jan 13 20:16:37.074552 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:37.075941 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:16:37.076046 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:37.080285 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:16:37.081139 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:16:37.084118 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:16:37.084215 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:16:37.088822 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:16:37.089227 systemd[1]: Stopped target network.target - Network.
Jan 13 20:16:37.090174 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:16:37.090233 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:16:37.091255 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:16:37.091311 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:16:37.093223 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:16:37.093267 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:16:37.095143 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:16:37.095191 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:37.098226 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:16:37.101239 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:37.105526 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:16:37.105611 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:16:37.107175 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:16:37.107258 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:16:37.110433 systemd-networkd[764]: eth0: DHCPv6 lease lost
Jan 13 20:16:37.111228 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:16:37.111395 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:37.114183 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:16:37.114337 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:16:37.117015 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:16:37.117072 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:37.129190 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:16:37.130723 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:16:37.130792 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:37.133026 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:16:37.133088 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:37.134994 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:16:37.135051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:37.137168 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:16:37.137226 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:37.139424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:37.153471 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:16:37.154529 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:16:37.160237 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:16:37.160472 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:37.164177 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:16:37.164237 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:37.166315 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:16:37.166363 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:37.168235 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:16:37.168295 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:16:37.171042 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:16:37.171113 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:16:37.173922 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:16:37.173977 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:37.188236 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:16:37.189327 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:16:37.189403 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:37.191553 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:37.191605 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:37.193827 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:16:37.193913 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:16:37.196227 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:16:37.198718 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:16:37.209334 systemd[1]: Switching root. Jan 13 20:16:37.239250 systemd-journald[238]: Journal stopped Jan 13 20:16:37.951645 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 13 20:16:37.951698 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:16:37.951710 kernel: SELinux: policy capability open_perms=1 Jan 13 20:16:37.951720 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:16:37.951732 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:16:37.951741 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:16:37.951751 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:16:37.951761 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:16:37.951771 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:16:37.951780 kernel: audit: type=1403 audit(1736799397.378:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:16:37.951791 systemd[1]: Successfully loaded SELinux policy in 32.532ms. Jan 13 20:16:37.951811 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.293ms. Jan 13 20:16:37.951828 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:16:37.951841 systemd[1]: Detected virtualization kvm. Jan 13 20:16:37.951851 systemd[1]: Detected architecture arm64. Jan 13 20:16:37.951862 systemd[1]: Detected first boot. Jan 13 20:16:37.951872 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:16:37.951882 zram_generator::config[1044]: No configuration found. Jan 13 20:16:37.951897 systemd[1]: Populated /etc with preset unit settings. 
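The systemd 255 banner above encodes compile-time options as +FLAG/-FLAG tokens (plus a trailing default-hierarchy key). A small sketch of splitting that string into enabled and disabled sets; the token list below is copied verbatim from the log line:

```python
# The +/- tokens from the "systemd 255 running in system mode (...)" line above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

enabled  = {t[1:] for t in features.split() if t.startswith("+")}
disabled = {t[1:] for t in features.split() if t.startswith("-")}
print(len(enabled), "enabled;", len(disabled), "disabled")  # 24 enabled; 11 disabled
print("SELinux built in:", "SELINUX" in enabled)  # True, matching the policy load above
```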
Jan 13 20:16:37.951910 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:16:37.951921 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:16:37.951933 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:16:37.951944 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:16:37.951954 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:16:37.951965 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:16:37.951975 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:16:37.951985 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:16:37.951996 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:16:37.952006 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:16:37.952016 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:16:37.952028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:37.952039 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:37.952049 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:16:37.952061 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:16:37.952072 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:16:37.952094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:16:37.952107 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:16:37.952117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:37.952128 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:16:37.952140 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:16:37.952150 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:16:37.952161 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:16:37.952172 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:37.952182 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:16:37.952193 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:16:37.952203 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:16:37.952213 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:16:37.952225 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:16:37.952236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:37.952246 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:37.952256 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:37.952266 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:16:37.952278 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
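"Initializing machine ID from VM UUID" above indicates that, on this first boot, systemd derived the machine ID from a hypervisor-supplied UUID instead of generating a random one. A rough sketch of the normalization step only, using a made-up UUID; the actual source (e.g. the DMI product UUID) and precedence rules are more involved:

```python
import uuid

# Hypothetical UUID as a VM might expose it (e.g. via /sys/class/dmi/id/product_uuid).
product_uuid = "2B6F7A70-8B5C-4E2F-9A31-D4C0FFEE0001"

# /etc/machine-id format: 32 lowercase hex characters, no dashes.
machine_id = uuid.UUID(product_uuid).hex
assert len(machine_id) == 32
print(machine_id)   # 2b6f7a708b5c4e2f9a31d4c0ffee0001
```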
Jan 13 20:16:37.952289 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:16:37.952305 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:16:37.952317 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:16:37.952329 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:16:37.952339 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:16:37.952349 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:16:37.952360 systemd[1]: Reached target machines.target - Containers. Jan 13 20:16:37.952370 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:16:37.952380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:37.952391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:16:37.952401 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:16:37.952413 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:37.952423 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:37.952433 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:37.952444 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:16:37.952457 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:37.952467 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:16:37.952481 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:16:37.952492 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:16:37.952503 kernel: fuse: init (API version 7.39) Jan 13 20:16:37.952514 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:16:37.952524 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:16:37.952535 kernel: loop: module loaded Jan 13 20:16:37.952545 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:16:37.952555 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:16:37.952565 kernel: ACPI: bus type drm_connector registered Jan 13 20:16:37.952575 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:16:37.952585 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:16:37.952614 systemd-journald[1118]: Collecting audit messages is disabled. Jan 13 20:16:37.952638 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:16:37.952649 systemd-journald[1118]: Journal started Jan 13 20:16:37.952671 systemd-journald[1118]: Runtime Journal (/run/log/journal/0dc9d120eef546a486e998abd57bd5a6) is 5.9M, max 47.3M, 41.4M free. Jan 13 20:16:37.745057 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:16:37.767252 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:16:37.767634 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 13 20:16:37.955593 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:16:37.955625 systemd[1]: Stopped verity-setup.service. Jan 13 20:16:37.959858 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:16:37.960564 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:16:37.961770 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:16:37.963015 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:16:37.964211 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:16:37.965460 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:16:37.966830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:16:37.968179 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:16:37.969634 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:37.971292 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:16:37.971454 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:16:37.972954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:37.973165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:37.974601 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:37.974748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:37.976265 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:37.976412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:37.977925 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:16:37.978093 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:16:37.979494 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:37.979643 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:37.981344 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:37.982839 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:16:37.984516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:16:37.997675 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:16:38.008205 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:16:38.010498 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:16:38.011692 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:16:38.011747 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:16:38.013896 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:16:38.016239 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:16:38.018449 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:16:38.019599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:38.021231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 13 20:16:38.023212 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:16:38.024540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:38.028275 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:16:38.032108 systemd-journald[1118]: Time spent on flushing to /var/log/journal/0dc9d120eef546a486e998abd57bd5a6 is 21.693ms for 851 entries. Jan 13 20:16:38.032108 systemd-journald[1118]: System Journal (/var/log/journal/0dc9d120eef546a486e998abd57bd5a6) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:16:38.067292 systemd-journald[1118]: Received client request to flush runtime journal. Jan 13 20:16:38.067364 kernel: loop0: detected capacity change from 0 to 113536 Jan 13 20:16:38.031377 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:38.034415 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:16:38.037001 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:16:38.042362 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:16:38.045208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:38.046831 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:16:38.048320 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:16:38.051528 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:16:38.053256 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:16:38.058325 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:16:38.069612 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:16:38.078618 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:16:38.075075 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:16:38.076750 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:16:38.078448 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:38.090234 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 20:16:38.095041 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:16:38.103099 kernel: loop1: detected capacity change from 0 to 194096 Jan 13 20:16:38.107385 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:16:38.110766 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:16:38.113111 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:16:38.133099 kernel: loop2: detected capacity change from 0 to 116808 Jan 13 20:16:38.134359 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 13 20:16:38.134375 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jan 13 20:16:38.144974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
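The journald line above reports 21.693 ms spent flushing 851 entries from the runtime journal to /var/log/journal. A quick check of the implied per-entry cost:

```python
flush_ms, entries = 21.693, 851              # figures from the journald line above
per_entry_us = flush_ms * 1000 / entries
print(f"~{per_entry_us:.1f} us per entry")   # ~25.5 us per entry
```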
Jan 13 20:16:38.171120 kernel: loop3: detected capacity change from 0 to 113536 Jan 13 20:16:38.175243 kernel: loop4: detected capacity change from 0 to 194096 Jan 13 20:16:38.181119 kernel: loop5: detected capacity change from 0 to 116808 Jan 13 20:16:38.183788 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:16:38.184176 (sd-merge)[1180]: Merged extensions into '/usr'. Jan 13 20:16:38.189062 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:16:38.189091 systemd[1]: Reloading... Jan 13 20:16:38.243973 zram_generator::config[1203]: No configuration found. Jan 13 20:16:38.314161 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:16:38.354435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:38.395488 systemd[1]: Reloading finished in 205 ms. Jan 13 20:16:38.425998 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:16:38.427598 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:16:38.441261 systemd[1]: Starting ensure-sysext.service... Jan 13 20:16:38.443310 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:16:38.451703 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:16:38.451723 systemd[1]: Reloading... Jan 13 20:16:38.465280 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:16:38.465541 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:16:38.466311 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:16:38.466535 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 13 20:16:38.466585 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Jan 13 20:16:38.468556 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:38.468568 systemd-tmpfiles[1242]: Skipping /boot Jan 13 20:16:38.475791 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:38.475811 systemd-tmpfiles[1242]: Skipping /boot Jan 13 20:16:38.508314 zram_generator::config[1269]: No configuration found. Jan 13 20:16:38.598474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:38.639588 systemd[1]: Reloading finished in 187 ms. Jan 13 20:16:38.654583 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:16:38.667578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:38.675413 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:38.678105 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:16:38.680726 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
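The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is why the follow-up reload picks up docker.socket. A rough sketch of just the discovery step, assuming the conventional search directories (Ignition dropped kubernetes.raw under /etc/extensions earlier in this log); the real merge also validates extension-release metadata before mounting an overlay:

```python
from pathlib import Path

# Directories systemd-sysext conventionally searches for extension images.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[Path]:
    found = []
    for d in map(Path, SEARCH_DIRS):
        if d.is_dir():
            found.extend(sorted(d.glob("*.raw")))  # raw images, e.g. kubernetes.raw
    return found

for image in discover_extensions():
    print(image.stem)
```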
Jan 13 20:16:38.684282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:16:38.689617 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:38.693427 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:16:38.697113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:38.699444 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:38.704664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:38.709261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:38.710404 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:38.712206 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:16:38.714273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:38.714428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:38.718381 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:16:38.720565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:38.720694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:38.722566 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:38.722707 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:38.729580 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Jan 13 20:16:38.730929 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:38.738439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:38.745248 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:38.749816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:38.751225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:38.756439 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:16:38.758574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:38.764186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:38.764358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:38.766511 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:38.766686 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:38.768711 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:38.768874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:38.780015 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:16:38.782108 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:16:38.802146 systemd[1]: Finished ensure-sysext.service. Jan 13 20:16:38.803294 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 13 20:16:38.808788 augenrules[1371]: No rules Jan 13 20:16:38.812422 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:16:38.814061 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:38.815163 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:38.826113 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1347) Jan 13 20:16:38.828675 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:16:38.829878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:38.836308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:38.839107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:38.842280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:38.844766 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:38.846254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:38.849528 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:38.852925 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:16:38.854451 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:16:38.854882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:38.855049 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:38.863072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:38.863234 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:38.864776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:38.868719 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:38.868904 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:38.873018 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:38.873274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:38.878755 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:38.909659 systemd-resolved[1308]: Positive Trust Anchors: Jan 13 20:16:38.909735 systemd-resolved[1308]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:16:38.909764 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:16:38.921477 systemd-resolved[1308]: Defaulting to hostname 'linux'. Jan 13 20:16:38.928910 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:16:38.930226 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:38.932500 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:16:38.934957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:16:38.936489 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:16:38.944224 systemd-networkd[1385]: lo: Link UP Jan 13 20:16:38.944237 systemd-networkd[1385]: lo: Gained carrier Jan 13 20:16:38.945001 systemd-networkd[1385]: Enumeration completed Jan 13 20:16:38.947234 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:16:38.948206 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:38.948214 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:38.948531 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:38.951515 systemd[1]: Reached target network.target - Network. Jan 13 20:16:38.953783 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:16:38.954558 systemd-networkd[1385]: eth0: Link UP Jan 13 20:16:38.954565 systemd-networkd[1385]: eth0: Gained carrier Jan 13 20:16:38.954579 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:38.957446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:38.961654 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:16:38.971450 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:16:38.977548 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:38.980345 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. Jan 13 20:16:38.981161 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:16:38.981205 systemd-timesyncd[1386]: Initial clock synchronization to Mon 2025-01-13 20:16:38.767262 UTC. Jan 13 20:16:38.981358 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:16:39.000962 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
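The negative trust anchors listed above tell systemd-resolved not to attempt DNSSEC validation for private and reverse-lookup zones, while the root DS record anchors validation for everything else. A minimal sketch of the suffix matching involved, using a few anchors from that list:

```python
# A few of systemd-resolved's negative trust anchors from the log above.
NEGATIVE_ANCHORS = {"home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
                    "local", "internal", "lan", "test"}

def under_negative_anchor(name: str) -> bool:
    name = name.rstrip(".").lower()
    return any(name == a or name.endswith("." + a) for a in NEGATIVE_ANCHORS)

print(under_negative_anchor("82.0.0.10.in-addr.arpa"))  # True: reverse zone for 10.0.0.82
print(under_negative_anchor("example.org"))             # False: normal validation applies
```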
Jan 13 20:16:39.008648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:39.035091 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:16:39.037109 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:39.038326 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:16:39.039518 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:16:39.040784 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:16:39.042233 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:16:39.043555 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:16:39.044809 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:16:39.046065 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:16:39.046109 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:16:39.046997 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:16:39.048818 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:16:39.051304 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:16:39.062155 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:16:39.064628 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:16:39.066276 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:16:39.067478 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:16:39.068452 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:16:39.069449 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:39.069481 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:16:39.070403 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:16:39.072444 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:16:39.075202 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:39.076257 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:16:39.078224 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:16:39.079522 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:16:39.082356 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:16:39.084331 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:16:39.091047 jq[1417]: false Jan 13 20:16:39.092054 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:16:39.095232 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:16:39.098964 systemd[1]: Starting systemd-logind.service - User Login Management... 
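The DHCPv4 lease logged just above gave eth0 the address 10.0.0.82/16 with gateway 10.0.0.1. A quick sanity check of that lease with the standard library:

```python
import ipaddress

iface = ipaddress.ip_interface("10.0.0.82/16")   # lease from the systemd-networkd line
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                    # 10.0.0.0/16
print(iface.network.broadcast_address)  # 10.0.255.255
print(gateway in iface.network)         # True: the gateway is on-link
```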
Jan 13 20:16:39.104350 extend-filesystems[1418]: Found loop3 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found loop4 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found loop5 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found vda Jan 13 20:16:39.104350 extend-filesystems[1418]: Found vda1 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found vda2 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found vda3 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found usr Jan 13 20:16:39.104350 extend-filesystems[1418]: Found vda4 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found vda6 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found vda7 Jan 13 20:16:39.104350 extend-filesystems[1418]: Found vda9 Jan 13 20:16:39.104350 extend-filesystems[1418]: Checking size of /dev/vda9 Jan 13 20:16:39.103722 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:16:39.115344 dbus-daemon[1416]: [system] SELinux support is enabled Jan 13 20:16:39.104112 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:16:39.122116 jq[1434]: true Jan 13 20:16:39.107186 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:16:39.112164 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:16:39.114413 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:16:39.117498 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:16:39.123129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:16:39.123338 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:16:39.127391 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:16:39.127540 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:16:39.129541 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:16:39.129701 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:16:39.139491 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:16:39.139572 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:16:39.141048 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:16:39.141089 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
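The extend-filesystems pass above feeds into the resize logged just below, where /dev/vda9 is grown online from 553472 to 1864699 4k blocks. A quick check of what those figures mean in bytes:

```python
BLOCK = 4096                               # ext4 block size reported in the kernel line below
old_blocks, new_blocks = 553_472, 1_864_699

def to_gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"{to_gib(old_blocks):.2f} GiB -> {to_gib(new_blocks):.2f} GiB")  # 2.11 GiB -> 7.11 GiB
```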
Jan 13 20:16:39.148536 extend-filesystems[1418]: Resized partition /dev/vda9 Jan 13 20:16:39.148429 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:16:39.152379 tar[1438]: linux-arm64/helm Jan 13 20:16:39.157759 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:16:39.161295 jq[1439]: true Jan 13 20:16:39.169292 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1347) Jan 13 20:16:39.169315 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:16:39.171390 systemd-logind[1426]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:16:39.171636 systemd-logind[1426]: New seat seat0. Jan 13 20:16:39.179715 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:16:39.207116 update_engine[1433]: I20250113 20:16:39.205837 1433 main.cc:92] Flatcar Update Engine starting Jan 13 20:16:39.211091 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:16:39.219418 update_engine[1433]: I20250113 20:16:39.219336 1433 update_check_scheduler.cc:74] Next update check in 7m10s Jan 13 20:16:39.221237 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:16:39.226424 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:16:39.226424 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:16:39.226424 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:16:39.231736 extend-filesystems[1418]: Resized filesystem in /dev/vda9 Jan 13 20:16:39.231755 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:16:39.234041 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:16:39.236252 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:16:39.237968 bash[1469]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:16:39.238236 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:16:39.243637 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:16:39.292121 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:16:39.395492 containerd[1450]: time="2025-01-13T20:16:39.395153918Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:16:39.417846 containerd[1450]: time="2025-01-13T20:16:39.417798627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419233511Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419293854Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419311295Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419451681Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419467759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419517902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419529231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419676663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419689900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419701307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420288 containerd[1450]: time="2025-01-13T20:16:39.419710105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420503 containerd[1450]: time="2025-01-13T20:16:39.419787422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420503 containerd[1450]: time="2025-01-13T20:16:39.419977640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420503 containerd[1450]: time="2025-01-13T20:16:39.420099183Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:16:39.420503 containerd[1450]: time="2025-01-13T20:16:39.420113432Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:16:39.420503 containerd[1450]: time="2025-01-13T20:16:39.420187440Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:16:39.420503 containerd[1450]: time="2025-01-13T20:16:39.420224930Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:16:39.424325 containerd[1450]: time="2025-01-13T20:16:39.424298331Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:16:39.424444 containerd[1450]: time="2025-01-13T20:16:39.424429101Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:16:39.424568 containerd[1450]: time="2025-01-13T20:16:39.424550099Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 13 20:16:39.424646 containerd[1450]: time="2025-01-13T20:16:39.424625586Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:16:39.424749 containerd[1450]: time="2025-01-13T20:16:39.424733192Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:16:39.424939 containerd[1450]: time="2025-01-13T20:16:39.424918776Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:16:39.425438 containerd[1450]: time="2025-01-13T20:16:39.425414720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:16:39.425698 containerd[1450]: time="2025-01-13T20:16:39.425674896Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:16:39.425854 containerd[1450]: time="2025-01-13T20:16:39.425835916Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:16:39.425918 containerd[1450]: time="2025-01-13T20:16:39.425905135Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:16:39.426020 containerd[1450]: time="2025-01-13T20:16:39.426005733Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:16:39.426106 containerd[1450]: time="2025-01-13T20:16:39.426073863Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:16:39.426222 containerd[1450]: time="2025-01-13T20:16:39.426207085Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:16:39.426284 containerd[1450]: time="2025-01-13T20:16:39.426271049Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:16:39.426382 containerd[1450]: time="2025-01-13T20:16:39.426367637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:16:39.426437 containerd[1450]: time="2025-01-13T20:16:39.426425605Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:16:39.426500 containerd[1450]: time="2025-01-13T20:16:39.426487817Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:16:39.426615 containerd[1450]: time="2025-01-13T20:16:39.426582614Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:16:39.426749 containerd[1450]: time="2025-01-13T20:16:39.426732733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.426811 containerd[1450]: time="2025-01-13T20:16:39.426798604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.426907 containerd[1450]: time="2025-01-13T20:16:39.426892778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.426979 containerd[1450]: time="2025-01-13T20:16:39.426964334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 13 20:16:39.427197 containerd[1450]: time="2025-01-13T20:16:39.427023821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.427197 containerd[1450]: time="2025-01-13T20:16:39.427122238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.427197 containerd[1450]: time="2025-01-13T20:16:39.427142911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428267 containerd[1450]: time="2025-01-13T20:16:39.428229595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428369 containerd[1450]: time="2025-01-13T20:16:39.428280166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428369 containerd[1450]: time="2025-01-13T20:16:39.428305705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428369 containerd[1450]: time="2025-01-13T20:16:39.428320421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428369 containerd[1450]: time="2025-01-13T20:16:39.428338368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428369 containerd[1450]: time="2025-01-13T20:16:39.428356549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428467 containerd[1450]: time="2025-01-13T20:16:39.428375742Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:16:39.428467 containerd[1450]: time="2025-01-13T20:16:39.428405057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428467 containerd[1450]: time="2025-01-13T20:16:39.428423550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428467 containerd[1450]: time="2025-01-13T20:16:39.428437526Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:16:39.428730 containerd[1450]: time="2025-01-13T20:16:39.428686919Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:16:39.428730 containerd[1450]: time="2025-01-13T20:16:39.428716467Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:16:39.428730 containerd[1450]: time="2025-01-13T20:16:39.428727718Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:16:39.428801 containerd[1450]: time="2025-01-13T20:16:39.428744615Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:16:39.428801 containerd[1450]: time="2025-01-13T20:16:39.428757384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.428801 containerd[1450]: time="2025-01-13T20:16:39.428772918Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 13 20:16:39.428801 containerd[1450]: time="2025-01-13T20:16:39.428785453Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:16:39.428801 containerd[1450]: time="2025-01-13T20:16:39.428796665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 13 20:16:39.429433 containerd[1450]: time="2025-01-13T20:16:39.429376038Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:16:39.429800 containerd[1450]: time="2025-01-13T20:16:39.429551267Z" level=info msg="Connect containerd service" Jan 13 20:16:39.429800 containerd[1450]: time="2025-01-13T20:16:39.429598607Z" level=info msg="using legacy CRI server" Jan 13 20:16:39.429800 containerd[1450]: time="2025-01-13T20:16:39.429607951Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:16:39.430548 containerd[1450]: time="2025-01-13T20:16:39.430365083Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:16:39.431001 
containerd[1450]: time="2025-01-13T20:16:39.430977236Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:16:39.431325 containerd[1450]: time="2025-01-13T20:16:39.431287789Z" level=info msg="Start subscribing containerd event" Jan 13 20:16:39.431440 containerd[1450]: time="2025-01-13T20:16:39.431425917Z" level=info msg="Start recovering state" Jan 13 20:16:39.431523 containerd[1450]: time="2025-01-13T20:16:39.431499497Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:16:39.431613 containerd[1450]: time="2025-01-13T20:16:39.431544890Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:16:39.431824 containerd[1450]: time="2025-01-13T20:16:39.431795062Z" level=info msg="Start event monitor" Jan 13 20:16:39.431897 containerd[1450]: time="2025-01-13T20:16:39.431883903Z" level=info msg="Start snapshots syncer" Jan 13 20:16:39.432117 containerd[1450]: time="2025-01-13T20:16:39.432015607Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:16:39.432117 containerd[1450]: time="2025-01-13T20:16:39.432032191Z" level=info msg="Start streaming server" Jan 13 20:16:39.432600 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:16:39.433910 containerd[1450]: time="2025-01-13T20:16:39.433677420Z" level=info msg="containerd successfully booted in 0.040334s" Jan 13 20:16:39.532108 tar[1438]: linux-arm64/LICENSE Jan 13 20:16:39.532262 tar[1438]: linux-arm64/README.md Jan 13 20:16:39.544492 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:16:39.961071 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:16:39.979637 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:16:39.996429 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:16:40.002340 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:16:40.002641 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:16:40.006925 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:16:40.018678 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:16:40.021835 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:16:40.024153 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:16:40.025873 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:16:40.341227 systemd-networkd[1385]: eth0: Gained IPv6LL Jan 13 20:16:40.343648 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:16:40.345619 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:16:40.355302 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:16:40.357602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:16:40.359700 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:16:40.373986 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:16:40.374264 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:16:40.377604 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
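The "failed to load cni during init" error above is expected at this stage: the CRI plugin watches NetworkPluginConfDir=/etc/cni/net.d (see the config dump earlier) and nothing has installed a network config there yet. Below is a minimal sketch of a conflist that would satisfy the loader, following the standard CNI format — the bridge/host-local plugin choice, the network name, and the subnet are illustrative assumptions, not values from this host:

#!/usr/bin/env python3
# Write a minimal CNI network config into the directory the CRI plugin
# watches (NetworkPluginConfDir=/etc/cni/net.d per the config dump above).
# Plugin choice, name, and subnet are illustrative assumptions; the
# referenced plugin binaries must exist in NetworkPluginBinDir=/opt/cni/bin.
import json
import pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # assumed pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-containerd-net.conflist")
path.write_text(json.dumps(conflist, indent=2) + "\n")
print("wrote", path)

Once a file like this exists, the cni network conf syncer started above picks it up without a containerd restart.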
Jan 13 20:16:40.386672 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:16:40.841180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:16:40.842755 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:16:40.845168 systemd[1]: Startup finished in 552ms (kernel) + 4.672s (initrd) + 3.503s (userspace) = 8.728s. Jan 13 20:16:40.845208 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:16:41.308092 kubelet[1528]: E0113 20:16:41.308033 1528 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:16:41.310737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:16:41.310896 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:16:45.109865 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:16:45.111043 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:58686.service - OpenSSH per-connection server daemon (10.0.0.1:58686). Jan 13 20:16:45.181854 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 58686 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:45.183478 sshd-session[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:45.193706 systemd-logind[1426]: New session 1 of user core. Jan 13 20:16:45.194710 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:16:45.205339 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:16:45.214619 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:16:45.217832 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:16:45.224649 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:16:45.296354 systemd[1546]: Queued start job for default target default.target. Jan 13 20:16:45.308977 systemd[1546]: Created slice app.slice - User Application Slice. Jan 13 20:16:45.309029 systemd[1546]: Reached target paths.target - Paths. Jan 13 20:16:45.309042 systemd[1546]: Reached target timers.target - Timers. Jan 13 20:16:45.310318 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:16:45.319652 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:16:45.319713 systemd[1546]: Reached target sockets.target - Sockets. Jan 13 20:16:45.319724 systemd[1546]: Reached target basic.target - Basic System. Jan 13 20:16:45.319759 systemd[1546]: Reached target default.target - Main User Target. Jan 13 20:16:45.319783 systemd[1546]: Startup finished in 89ms. Jan 13 20:16:45.320053 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:16:45.321395 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:16:45.381553 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:58700.service - OpenSSH per-connection server daemon (10.0.0.1:58700). 
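The kubelet exit above ("failed to load kubelet config file ... /var/lib/kubelet/config.yaml") is the normal state of a node before `kubeadm init` has run: the unit points at a config file that kubeadm only writes during init, so the kubelet exits with status 1 and systemd keeps rescheduling it (the restart counter shows up further down). For reference, a sketch of the smallest KubeletConfiguration that would get past this load step — the field values are assumptions modelled on a kubeadm-style layout, not taken from this host:

#!/usr/bin/env python3
# Sketch: materialize a minimal /var/lib/kubelet/config.yaml of the kind
# `kubeadm init` generates. Field values are assumptions; cgroupDriver
# matches the SystemdCgroup:true runc option in the containerd dump above.
import pathlib

CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(CONFIG)
print("wrote", path)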
Jan 13 20:16:45.426027 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 58700 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:45.427400 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:45.431216 systemd-logind[1426]: New session 2 of user core. Jan 13 20:16:45.440233 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 13 20:16:45.492437 sshd[1559]: Connection closed by 10.0.0.1 port 58700 Jan 13 20:16:45.496156 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:58700.service: Deactivated successfully. Jan 13 20:16:45.492893 sshd-session[1557]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:45.497817 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:16:45.499058 systemd-logind[1426]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:16:45.511303 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:58702.service - OpenSSH per-connection server daemon (10.0.0.1:58702). Jan 13 20:16:45.511763 systemd-logind[1426]: Removed session 2. Jan 13 20:16:45.549616 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 58702 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:45.550750 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:45.554559 systemd-logind[1426]: New session 3 of user core. Jan 13 20:16:45.567229 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:16:45.614827 sshd[1566]: Connection closed by 10.0.0.1 port 58702 Jan 13 20:16:45.615338 sshd-session[1564]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:45.626517 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:58702.service: Deactivated successfully. Jan 13 20:16:45.627815 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:16:45.630234 systemd-logind[1426]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:16:45.631328 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:58708.service - OpenSSH per-connection server daemon (10.0.0.1:58708). Jan 13 20:16:45.632100 systemd-logind[1426]: Removed session 3. Jan 13 20:16:45.669562 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 58708 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:45.670724 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:45.674600 systemd-logind[1426]: New session 4 of user core. Jan 13 20:16:45.691237 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:16:45.741952 sshd[1573]: Connection closed by 10.0.0.1 port 58708 Jan 13 20:16:45.742247 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:45.756776 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:58708.service: Deactivated successfully. Jan 13 20:16:45.758259 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:16:45.760556 systemd-logind[1426]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:16:45.762363 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:58716.service - OpenSSH per-connection server daemon (10.0.0.1:58716). Jan 13 20:16:45.763124 systemd-logind[1426]: Removed session 4. 
Jan 13 20:16:45.804413 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 58716 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:45.804792 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:45.808938 systemd-logind[1426]: New session 5 of user core. Jan 13 20:16:45.826264 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 13 20:16:45.894675 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:16:45.894979 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:16:45.912269 sudo[1581]: pam_unix(sudo:session): session closed for user root Jan 13 20:16:45.913862 sshd[1580]: Connection closed by 10.0.0.1 port 58716 Jan 13 20:16:45.914499 sshd-session[1578]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:45.934030 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:58716.service: Deactivated successfully. Jan 13 20:16:45.938549 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:16:45.939870 systemd-logind[1426]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:16:45.947394 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:58724.service - OpenSSH per-connection server daemon (10.0.0.1:58724). Jan 13 20:16:45.948659 systemd-logind[1426]: Removed session 5. Jan 13 20:16:45.985753 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 58724 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:45.987350 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:45.991179 systemd-logind[1426]: New session 6 of user core. Jan 13 20:16:46.004258 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:16:46.054705 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:16:46.054981 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:16:46.058017 sudo[1590]: pam_unix(sudo:session): session closed for user root Jan 13 20:16:46.062778 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:16:46.063118 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:16:46.081414 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:46.108586 augenrules[1612]: No rules Jan 13 20:16:46.109949 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:46.110195 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:46.111287 sudo[1589]: pam_unix(sudo:session): session closed for user root Jan 13 20:16:46.112598 sshd[1588]: Connection closed by 10.0.0.1 port 58724 Jan 13 20:16:46.112996 sshd-session[1586]: pam_unix(sshd:session): session closed for user core Jan 13 20:16:46.123799 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:58724.service: Deactivated successfully. Jan 13 20:16:46.125518 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:16:46.126891 systemd-logind[1426]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:16:46.128044 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:58736.service - OpenSSH per-connection server daemon (10.0.0.1:58736). Jan 13 20:16:46.128801 systemd-logind[1426]: Removed session 6. 
Jan 13 20:16:46.172574 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 58736 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:16:46.173925 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:16:46.178149 systemd-logind[1426]: New session 7 of user core. Jan 13 20:16:46.184242 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:16:46.235061 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:16:46.235372 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:16:46.562342 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:16:46.562481 (dockerd)[1643]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:16:46.850316 dockerd[1643]: time="2025-01-13T20:16:46.850193662Z" level=info msg="Starting up" Jan 13 20:16:46.994836 dockerd[1643]: time="2025-01-13T20:16:46.994739316Z" level=info msg="Loading containers: start." Jan 13 20:16:47.129130 kernel: Initializing XFRM netlink socket Jan 13 20:16:47.199820 systemd-networkd[1385]: docker0: Link UP Jan 13 20:16:47.242416 dockerd[1643]: time="2025-01-13T20:16:47.242309227Z" level=info msg="Loading containers: done." Jan 13 20:16:47.255781 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck142430822-merged.mount: Deactivated successfully. Jan 13 20:16:47.257982 dockerd[1643]: time="2025-01-13T20:16:47.257937201Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:16:47.258066 dockerd[1643]: time="2025-01-13T20:16:47.258047340Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:16:47.258226 dockerd[1643]: time="2025-01-13T20:16:47.258173372Z" level=info msg="Daemon has completed initialization" Jan 13 20:16:47.296947 dockerd[1643]: time="2025-01-13T20:16:47.296886296Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:16:47.297792 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:16:47.977542 containerd[1450]: time="2025-01-13T20:16:47.977491352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 13 20:16:48.702018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709057667.mount: Deactivated successfully. 
Jan 13 20:16:51.035261 containerd[1450]: time="2025-01-13T20:16:51.035208217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:51.035681 containerd[1450]: time="2025-01-13T20:16:51.035635982Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864012" Jan 13 20:16:51.036667 containerd[1450]: time="2025-01-13T20:16:51.036624035Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:51.039348 containerd[1450]: time="2025-01-13T20:16:51.039316704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:51.040707 containerd[1450]: time="2025-01-13T20:16:51.040473166Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 3.062931113s" Jan 13 20:16:51.040707 containerd[1450]: time="2025-01-13T20:16:51.040515337Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Jan 13 20:16:51.058392 containerd[1450]: time="2025-01-13T20:16:51.058364707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 13 20:16:51.404463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:16:51.414239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:16:51.506409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:16:51.510307 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:16:51.552314 kubelet[1917]: E0113 20:16:51.552260 1917 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:16:51.555030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:16:51.555180 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
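Each successful pull above logs the image size in bytes and the wall-clock duration, so effective registry throughput can be read straight off these lines. A small parser for that message format (the optional backslashes accept both the raw journal text, where the inner quotes are escaped, and plain containerd output):

#!/usr/bin/env python3
# Compute effective pull throughput from containerd
# 'Pulled image "..." ... size "N" in Xs' lines read on stdin.
import re
import sys

PAT = re.compile(
    r'Pulled image \\?"(?P<img>[^"\\]+)\\?"'   # image reference
    r'.*?size \\?"(?P<size>\d+)\\?"'           # unpacked size in bytes
    r' in (?P<dur>[\d.]+)(?P<unit>ms|s)'       # wall-clock duration
)

for line in sys.stdin:
    m = PAT.search(line)
    if not m:
        continue
    secs = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
    mib = int(m["size"]) / (1024 * 1024)
    print(f'{m["img"]}: {mib:.1f} MiB in {secs:.2f}s = {mib / secs:.1f} MiB/s')

For the kube-apiserver pull above (29860810 bytes in 3.062931113s) this works out to roughly 9.3 MiB/s.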
Jan 13 20:16:53.613054 containerd[1450]: time="2025-01-13T20:16:53.613008940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:53.614085 containerd[1450]: time="2025-01-13T20:16:53.613513412Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900696" Jan 13 20:16:53.615254 containerd[1450]: time="2025-01-13T20:16:53.615212860Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:53.618334 containerd[1450]: time="2025-01-13T20:16:53.618293592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:53.619054 containerd[1450]: time="2025-01-13T20:16:53.619016202Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.560615475s" Jan 13 20:16:53.619111 containerd[1450]: time="2025-01-13T20:16:53.619054723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Jan 13 20:16:53.638242 containerd[1450]: time="2025-01-13T20:16:53.638172854Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 13 20:16:55.008838 containerd[1450]: time="2025-01-13T20:16:55.008603526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:55.012566 containerd[1450]: time="2025-01-13T20:16:55.012499595Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164334" Jan 13 20:16:55.014044 containerd[1450]: time="2025-01-13T20:16:55.013997613Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:55.016625 containerd[1450]: time="2025-01-13T20:16:55.016595232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:55.018000 containerd[1450]: time="2025-01-13T20:16:55.017862421Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.379649364s" Jan 13 20:16:55.018000 containerd[1450]: time="2025-01-13T20:16:55.017898507Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Jan 13 20:16:55.037427 
containerd[1450]: time="2025-01-13T20:16:55.037386428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:16:56.149043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052978853.mount: Deactivated successfully. Jan 13 20:16:56.436421 containerd[1450]: time="2025-01-13T20:16:56.436370967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:56.437755 containerd[1450]: time="2025-01-13T20:16:56.437709023Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013" Jan 13 20:16:56.438706 containerd[1450]: time="2025-01-13T20:16:56.438659951Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:56.441162 containerd[1450]: time="2025-01-13T20:16:56.441130831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:56.441871 containerd[1450]: time="2025-01-13T20:16:56.441837315Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.404411727s" Jan 13 20:16:56.441918 containerd[1450]: time="2025-01-13T20:16:56.441870343Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 13 20:16:56.460439 containerd[1450]: time="2025-01-13T20:16:56.460407068Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:16:57.070712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930472160.mount: Deactivated successfully. 
Jan 13 20:16:57.829971 containerd[1450]: time="2025-01-13T20:16:57.829905092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:57.830510 containerd[1450]: time="2025-01-13T20:16:57.830432495Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 20:16:57.831462 containerd[1450]: time="2025-01-13T20:16:57.831430398Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:57.835880 containerd[1450]: time="2025-01-13T20:16:57.835827591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:57.837064 containerd[1450]: time="2025-01-13T20:16:57.836634717Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.376177584s" Jan 13 20:16:57.837064 containerd[1450]: time="2025-01-13T20:16:57.836667398Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:16:57.859424 containerd[1450]: time="2025-01-13T20:16:57.859375093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:16:58.317919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701912543.mount: Deactivated successfully. 
Jan 13 20:16:58.322220 containerd[1450]: time="2025-01-13T20:16:58.322166076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:58.322696 containerd[1450]: time="2025-01-13T20:16:58.322650011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 20:16:58.323494 containerd[1450]: time="2025-01-13T20:16:58.323469915Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:58.326509 containerd[1450]: time="2025-01-13T20:16:58.326456629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:16:58.327352 containerd[1450]: time="2025-01-13T20:16:58.327220291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 467.801183ms" Jan 13 20:16:58.327352 containerd[1450]: time="2025-01-13T20:16:58.327251026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:16:58.346477 containerd[1450]: time="2025-01-13T20:16:58.346383821Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 13 20:16:58.928128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341892879.mount: Deactivated successfully. Jan 13 20:17:01.506102 containerd[1450]: time="2025-01-13T20:17:01.505956460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.506830 containerd[1450]: time="2025-01-13T20:17:01.506782769Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jan 13 20:17:01.508093 containerd[1450]: time="2025-01-13T20:17:01.507655172Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.510877 containerd[1450]: time="2025-01-13T20:17:01.510842772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:01.512082 containerd[1450]: time="2025-01-13T20:17:01.512043429Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.165620285s" Jan 13 20:17:01.512132 containerd[1450]: time="2025-01-13T20:17:01.512088605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 13 20:17:01.654432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
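The restart counter incrementing here is the loop from the missing kubelet config playing out. The three "Started kubelet.service" entries in this log (20:16:40.841180, 20:16:51.506409, and 20:17:01.754649 just below) are spaced roughly ten seconds apart, consistent with a Restart=on-failure unit using a ~10 s RestartSec — an assumption, since the unit file itself is not part of this log. The spacing can be checked directly from the timestamps:

#!/usr/bin/env python3
# Gaps between the kubelet start attempts recorded in this log.
from datetime import datetime

starts = ["20:16:40.841180", "20:16:51.506409", "20:17:01.754649"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in starts]
for a, b in zip(times, times[1:]):
    print(f"{(b - a).total_seconds():.1f}s between attempts")
# prints 10.7s and 10.2s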
Jan 13 20:17:01.664304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:01.754649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:01.758215 (kubelet)[2085]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:01.796146 kubelet[2085]: E0113 20:17:01.796104 2085 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:01.799473 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:01.799608 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:06.617855 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:06.634356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:06.650769 systemd[1]: Reloading requested from client PID 2166 ('systemctl') (unit session-7.scope)... Jan 13 20:17:06.650791 systemd[1]: Reloading... Jan 13 20:17:06.721105 zram_generator::config[2204]: No configuration found. Jan 13 20:17:06.845247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:17:06.901650 systemd[1]: Reloading finished in 250 ms. Jan 13 20:17:06.947236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:06.949960 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:06.951313 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:17:06.952161 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:06.953752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:07.044352 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:07.047849 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:17:07.087812 kubelet[2252]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:17:07.087812 kubelet[2252]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:17:07.087812 kubelet[2252]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
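The three deprecation warnings above all point the same way: kubelet flags are migrating into the config file, or, for the sandbox image, into the runtime's own config (per the surrounding --pod-infra-container-image messages). A rough map of where each flag is expected to land — the destination field names are assumptions based on upstream KubeletConfiguration documentation, not on anything in this log:

#!/usr/bin/env python3
# Assumed destinations for the deprecated kubelet flags warned about above.
FLAG_TO_CONFIG = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint (KubeletConfiguration)",
    "--volume-plugin-dir": "volumePluginDir (KubeletConfiguration)",
    "--pod-infra-container-image": "sandbox_image in the runtime's CRI config",
}
for flag, dest in FLAG_TO_CONFIG.items():
    print(f"{flag:32} -> {dest}")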
Jan 13 20:17:07.088187 kubelet[2252]: I0113 20:17:07.087984 2252 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:17:08.061013 kubelet[2252]: I0113 20:17:08.060965 2252 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:17:08.061013 kubelet[2252]: I0113 20:17:08.061000 2252 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:17:08.061243 kubelet[2252]: I0113 20:17:08.061216 2252 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:17:08.086146 kubelet[2252]: E0113 20:17:08.086121 2252 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.086264 kubelet[2252]: I0113 20:17:08.086161 2252 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:17:08.094742 kubelet[2252]: I0113 20:17:08.094668 2252 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:17:08.095018 kubelet[2252]: I0113 20:17:08.094978 2252 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:17:08.095189 kubelet[2252]: I0113 20:17:08.095002 2252 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:17:08.095272 kubelet[2252]: I0113 20:17:08.095251 2252 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:17:08.095272 kubelet[2252]: I0113 20:17:08.095260 2252 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:17:08.095521 kubelet[2252]: I0113 20:17:08.095493 2252 state_mem.go:36] "Initialized new in-memory state store" Jan 13 
20:17:08.098327 kubelet[2252]: I0113 20:17:08.098299 2252 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:17:08.098327 kubelet[2252]: I0113 20:17:08.098325 2252 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:17:08.098688 kubelet[2252]: I0113 20:17:08.098594 2252 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:17:08.098688 kubelet[2252]: I0113 20:17:08.098671 2252 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:17:08.098950 kubelet[2252]: W0113 20:17:08.098894 2252 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.099001 kubelet[2252]: E0113 20:17:08.098954 2252 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.099221 kubelet[2252]: W0113 20:17:08.099182 2252 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.099348 kubelet[2252]: E0113 20:17:08.099307 2252 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.099652 kubelet[2252]: I0113 20:17:08.099627 2252 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:17:08.100004 kubelet[2252]: I0113 20:17:08.099989 2252 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:17:08.100479 kubelet[2252]: W0113 20:17:08.100204 2252 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
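The reflector and certificate failures above share one cause: every kubelet client is dialing https://10.0.0.82:6443 before the kube-apiserver static pod (created further down) is running, so each attempt ends in connection refused until the control plane comes up. A readiness probe in the same spirit, standard library only — certificate verification is skipped because the cluster CA may not be trusted at this point, and any HTTP response at all is taken as "the socket is up":

#!/usr/bin/env python3
# Poll the apiserver endpoint the kubelet clients above cannot reach yet.
import ssl
import time
import urllib.error
import urllib.request

URL = "https://10.0.0.82:6443/healthz"  # endpoint taken from this log
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE         # cluster CA may not be trusted yet

while True:
    try:
        with urllib.request.urlopen(URL, context=ctx, timeout=2) as resp:
            print("apiserver up:", resp.status)
            break
    except urllib.error.HTTPError as e:
        print(f"apiserver answering (HTTP {e.code}), socket is up")
        break
    except OSError as e:                # connection refused, timeout, TLS errors
        print("still waiting:", e)
        time.sleep(2)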
Jan 13 20:17:08.101288 kubelet[2252]: I0113 20:17:08.101262 2252 server.go:1264] "Started kubelet" Jan 13 20:17:08.102215 kubelet[2252]: I0113 20:17:08.102180 2252 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:17:08.103989 kubelet[2252]: I0113 20:17:08.103964 2252 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:17:08.104821 kubelet[2252]: I0113 20:17:08.104800 2252 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:17:08.105353 kubelet[2252]: I0113 20:17:08.105334 2252 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:17:08.105559 kubelet[2252]: I0113 20:17:08.105546 2252 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:17:08.109503 kubelet[2252]: I0113 20:17:08.106249 2252 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:17:08.114027 kubelet[2252]: I0113 20:17:08.113965 2252 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:17:08.114227 kubelet[2252]: I0113 20:17:08.114206 2252 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:17:08.115397 kubelet[2252]: W0113 20:17:08.115356 2252 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.115631 kubelet[2252]: E0113 20:17:08.115499 2252 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.115631 kubelet[2252]: E0113 20:17:08.115567 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms" Jan 13 20:17:08.115968 kubelet[2252]: E0113 20:17:08.115733 2252 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a59e0be1377f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:17:08.101240825 +0000 UTC m=+1.050390646,LastTimestamp:2025-01-13 20:17:08.101240825 +0000 UTC m=+1.050390646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:17:08.117166 kubelet[2252]: I0113 20:17:08.117059 2252 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:17:08.118052 kubelet[2252]: I0113 20:17:08.117966 2252 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:17:08.118052 kubelet[2252]: I0113 20:17:08.117984 2252 factory.go:221] Registration of the systemd container 
factory successfully Jan 13 20:17:08.119156 kubelet[2252]: E0113 20:17:08.119134 2252 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:17:08.128120 kubelet[2252]: I0113 20:17:08.128072 2252 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:17:08.128120 kubelet[2252]: I0113 20:17:08.128098 2252 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:17:08.128453 kubelet[2252]: I0113 20:17:08.128225 2252 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:17:08.129798 kubelet[2252]: I0113 20:17:08.129755 2252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:17:08.130772 kubelet[2252]: I0113 20:17:08.130708 2252 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:17:08.130772 kubelet[2252]: I0113 20:17:08.130740 2252 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:17:08.130772 kubelet[2252]: I0113 20:17:08.130756 2252 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:17:08.130881 kubelet[2252]: E0113 20:17:08.130791 2252 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:17:08.131318 kubelet[2252]: W0113 20:17:08.131068 2252 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.131318 kubelet[2252]: E0113 20:17:08.131181 2252 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:08.195455 kubelet[2252]: I0113 20:17:08.195404 2252 policy_none.go:49] "None policy: Start" Jan 13 20:17:08.197053 kubelet[2252]: I0113 20:17:08.196699 2252 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:17:08.197053 kubelet[2252]: I0113 20:17:08.196732 2252 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:17:08.203853 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:17:08.206367 kubelet[2252]: I0113 20:17:08.206343 2252 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:08.206967 kubelet[2252]: E0113 20:17:08.206939 2252 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jan 13 20:17:08.211839 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:17:08.214351 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
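These slices are the kubelet's QoS hierarchy under the systemd cgroup driver (matching SystemdCgroup:true in the containerd config dump): Guaranteed pods sit directly under kubepods.slice, Burstable and BestEffort pods under the two child slices created here, and each pod then gets its own slice just below. Because systemd nests a slice at every "-" in its name, the unit name alone determines the cgroupfs path:

#!/usr/bin/env python3
# Expand a systemd slice unit name into its cgroupfs path: slices nest at
# every '-', so kubepods-burstable-pod<uid>.slice lives under
# kubepods.slice/kubepods-burstable.slice/. (Dashes inside a pod UID would
# be escaped, as the \x2d sequences in the mount unit names above show;
# the UIDs in this log contain none.)
def slice_cgroup_path(unit: str, root: str = "/sys/fs/cgroup") -> str:
    assert unit.endswith(".slice")
    parts = unit[: -len(".slice")].split("-")
    segments = ["-".join(parts[: i + 1]) + ".slice" for i in range(len(parts))]
    return "/".join([root, *segments])

print(slice_cgroup_path("kubepods-burstable-podae7379b464085eaa18fbfb27954779c1.slice"))
# /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae7379....slice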
Jan 13 20:17:08.223968 kubelet[2252]: I0113 20:17:08.223823 2252 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:17:08.224064 kubelet[2252]: I0113 20:17:08.224027 2252 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:17:08.224172 kubelet[2252]: I0113 20:17:08.224151 2252 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:17:08.225602 kubelet[2252]: E0113 20:17:08.225570 2252 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:17:08.231862 kubelet[2252]: I0113 20:17:08.231817 2252 topology_manager.go:215] "Topology Admit Handler" podUID="ae7379b464085eaa18fbfb27954779c1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:17:08.232934 kubelet[2252]: I0113 20:17:08.232895 2252 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:17:08.233899 kubelet[2252]: I0113 20:17:08.233873 2252 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:17:08.239008 systemd[1]: Created slice kubepods-burstable-podae7379b464085eaa18fbfb27954779c1.slice - libcontainer container kubepods-burstable-podae7379b464085eaa18fbfb27954779c1.slice. Jan 13 20:17:08.256013 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice - libcontainer container kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Jan 13 20:17:08.268555 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice - libcontainer container kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. 
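The three "Topology Admit Handler" entries are the control-plane static pods picked up from the manifest directory registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests"). Per upstream kubelet behavior, a static pod's API object name is the manifest's pod name with the node name appended, which is why everything on this node surfaces as *-localhost:

#!/usr/bin/env python3
# Static pod object names: <manifest pod name>-<node name> (upstream
# kubelet naming rule; node name "localhost" is taken from this log).
NODE = "localhost"
for manifest_pod in ("kube-apiserver", "kube-controller-manager", "kube-scheduler"):
    print(f"{manifest_pod}-{NODE}")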
Jan 13 20:17:08.316588 kubelet[2252]: E0113 20:17:08.316482 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" Jan 13 20:17:08.406718 kubelet[2252]: I0113 20:17:08.406668 2252 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:08.406718 kubelet[2252]: I0113 20:17:08.406706 2252 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:08.406718 kubelet[2252]: I0113 20:17:08.406727 2252 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:08.406863 kubelet[2252]: I0113 20:17:08.406742 2252 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae7379b464085eaa18fbfb27954779c1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae7379b464085eaa18fbfb27954779c1\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:08.406863 kubelet[2252]: I0113 20:17:08.406758 2252 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae7379b464085eaa18fbfb27954779c1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae7379b464085eaa18fbfb27954779c1\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:08.406863 kubelet[2252]: I0113 20:17:08.406774 2252 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae7379b464085eaa18fbfb27954779c1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae7379b464085eaa18fbfb27954779c1\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:08.406863 kubelet[2252]: I0113 20:17:08.406791 2252 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:17:08.406863 kubelet[2252]: I0113 20:17:08.406804 2252 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:08.406964 kubelet[2252]: I0113 20:17:08.406819 2252 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:08.408661 kubelet[2252]: I0113 20:17:08.408630 2252 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:08.408930 kubelet[2252]: E0113 20:17:08.408892 2252 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jan 13 20:17:08.555062 kubelet[2252]: E0113 20:17:08.555025 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:08.555970 containerd[1450]: time="2025-01-13T20:17:08.555708902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae7379b464085eaa18fbfb27954779c1,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:08.567498 kubelet[2252]: E0113 20:17:08.567408 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:08.567886 containerd[1450]: time="2025-01-13T20:17:08.567849667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:08.571225 kubelet[2252]: E0113 20:17:08.571191 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:08.571606 containerd[1450]: time="2025-01-13T20:17:08.571568358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:08.717097 kubelet[2252]: E0113 20:17:08.717028 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Jan 13 20:17:08.810288 kubelet[2252]: I0113 20:17:08.810248 2252 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:08.810611 kubelet[2252]: E0113 20:17:08.810569 2252 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Jan 13 20:17:08.993515 kubelet[2252]: E0113 20:17:08.993408 2252 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a59e0be1377f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:17:08.101240825 +0000 UTC 
m=+1.050390646,LastTimestamp:2025-01-13 20:17:08.101240825 +0000 UTC m=+1.050390646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:17:09.043711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2951152817.mount: Deactivated successfully. Jan 13 20:17:09.049133 containerd[1450]: time="2025-01-13T20:17:09.048927050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:09.050434 containerd[1450]: time="2025-01-13T20:17:09.050393376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 20:17:09.052612 containerd[1450]: time="2025-01-13T20:17:09.052332192Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:09.053137 containerd[1450]: time="2025-01-13T20:17:09.053107375Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:09.054321 containerd[1450]: time="2025-01-13T20:17:09.054059072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:17:09.056164 containerd[1450]: time="2025-01-13T20:17:09.055569097Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:09.056934 containerd[1450]: time="2025-01-13T20:17:09.056911243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:17:09.058071 containerd[1450]: time="2025-01-13T20:17:09.057882530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.095272ms" Jan 13 20:17:09.058071 containerd[1450]: time="2025-01-13T20:17:09.058008589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:17:09.060604 containerd[1450]: time="2025-01-13T20:17:09.060560267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.378944ms" Jan 13 20:17:09.062158 containerd[1450]: time="2025-01-13T20:17:09.061919965Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"268403\" in 490.294518ms" Jan 13 20:17:09.202166 containerd[1450]: time="2025-01-13T20:17:09.202057550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:09.202353 containerd[1450]: time="2025-01-13T20:17:09.201773009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:09.202402 containerd[1450]: time="2025-01-13T20:17:09.202350887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:09.202402 containerd[1450]: time="2025-01-13T20:17:09.202363881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:09.202515 containerd[1450]: time="2025-01-13T20:17:09.202437725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:09.202972 containerd[1450]: time="2025-01-13T20:17:09.202927966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:09.203119 containerd[1450]: time="2025-01-13T20:17:09.203034275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:09.203332 containerd[1450]: time="2025-01-13T20:17:09.203241014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:09.206468 containerd[1450]: time="2025-01-13T20:17:09.206323034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:09.206549 containerd[1450]: time="2025-01-13T20:17:09.206445134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:09.206549 containerd[1450]: time="2025-01-13T20:17:09.206460887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:09.206673 containerd[1450]: time="2025-01-13T20:17:09.206625527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:09.219288 systemd[1]: Started cri-containerd-95f3dc3393fcfec5e9ad90fa3807c450861bcdd807a7058fbc39fdf75003433d.scope - libcontainer container 95f3dc3393fcfec5e9ad90fa3807c450861bcdd807a7058fbc39fdf75003433d. Jan 13 20:17:09.225155 systemd[1]: Started cri-containerd-26b15b2b81e081e559495eb42994ddf7fc2938a8e9b411939f83bc2ceb2bc967.scope - libcontainer container 26b15b2b81e081e559495eb42994ddf7fc2938a8e9b411939f83bc2ceb2bc967. Jan 13 20:17:09.226137 systemd[1]: Started cri-containerd-5f0e16dd00a9c3c0fd18bfb177a91fede8b9df21d379f8ddcfee10c3da9a31a9.scope - libcontainer container 5f0e16dd00a9c3c0fd18bfb177a91fede8b9df21d379f8ddcfee10c3da9a31a9. 
Jan 13 20:17:09.255708 kubelet[2252]: W0113 20:17:09.255552 2252 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:09.255708 kubelet[2252]: E0113 20:17:09.255619 2252 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:09.256752 containerd[1450]: time="2025-01-13T20:17:09.256693435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"95f3dc3393fcfec5e9ad90fa3807c450861bcdd807a7058fbc39fdf75003433d\"" Jan 13 20:17:09.258518 kubelet[2252]: E0113 20:17:09.258491 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:09.259380 containerd[1450]: time="2025-01-13T20:17:09.259127770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ae7379b464085eaa18fbfb27954779c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f0e16dd00a9c3c0fd18bfb177a91fede8b9df21d379f8ddcfee10c3da9a31a9\"" Jan 13 20:17:09.261125 kubelet[2252]: E0113 20:17:09.260933 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:09.263132 containerd[1450]: time="2025-01-13T20:17:09.263100236Z" level=info msg="CreateContainer within sandbox \"95f3dc3393fcfec5e9ad90fa3807c450861bcdd807a7058fbc39fdf75003433d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:17:09.264206 containerd[1450]: time="2025-01-13T20:17:09.264142089Z" level=info msg="CreateContainer within sandbox \"5f0e16dd00a9c3c0fd18bfb177a91fede8b9df21d379f8ddcfee10c3da9a31a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:17:09.267716 containerd[1450]: time="2025-01-13T20:17:09.267658537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"26b15b2b81e081e559495eb42994ddf7fc2938a8e9b411939f83bc2ceb2bc967\"" Jan 13 20:17:09.268551 kubelet[2252]: E0113 20:17:09.268433 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:09.270547 containerd[1450]: time="2025-01-13T20:17:09.270514107Z" level=info msg="CreateContainer within sandbox \"26b15b2b81e081e559495eb42994ddf7fc2938a8e9b411939f83bc2ceb2bc967\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:17:09.279380 containerd[1450]: time="2025-01-13T20:17:09.279316063Z" level=info msg="CreateContainer within sandbox \"95f3dc3393fcfec5e9ad90fa3807c450861bcdd807a7058fbc39fdf75003433d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f55418208840c331b53a2ab71b4bb415b47f2882ca20245366af44db59f77fbc\"" Jan 13 20:17:09.280094 containerd[1450]: time="2025-01-13T20:17:09.280063299Z" 
level=info msg="StartContainer for \"f55418208840c331b53a2ab71b4bb415b47f2882ca20245366af44db59f77fbc\"" Jan 13 20:17:09.283210 containerd[1450]: time="2025-01-13T20:17:09.283171506Z" level=info msg="CreateContainer within sandbox \"5f0e16dd00a9c3c0fd18bfb177a91fede8b9df21d379f8ddcfee10c3da9a31a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6629ff3622f36085c03cc8e0aa92fd78b792ff5baf243170cfee358cf75d25d\"" Jan 13 20:17:09.283751 containerd[1450]: time="2025-01-13T20:17:09.283722638Z" level=info msg="StartContainer for \"c6629ff3622f36085c03cc8e0aa92fd78b792ff5baf243170cfee358cf75d25d\"" Jan 13 20:17:09.285787 containerd[1450]: time="2025-01-13T20:17:09.285704233Z" level=info msg="CreateContainer within sandbox \"26b15b2b81e081e559495eb42994ddf7fc2938a8e9b411939f83bc2ceb2bc967\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d2e47ba445d0fa703c1bda6f4660f66c5bee12b73cc61ce9d74b0e2d38499d2e\"" Jan 13 20:17:09.286049 containerd[1450]: time="2025-01-13T20:17:09.286019640Z" level=info msg="StartContainer for \"d2e47ba445d0fa703c1bda6f4660f66c5bee12b73cc61ce9d74b0e2d38499d2e\"" Jan 13 20:17:09.306255 systemd[1]: Started cri-containerd-f55418208840c331b53a2ab71b4bb415b47f2882ca20245366af44db59f77fbc.scope - libcontainer container f55418208840c331b53a2ab71b4bb415b47f2882ca20245366af44db59f77fbc. Jan 13 20:17:09.309629 systemd[1]: Started cri-containerd-c6629ff3622f36085c03cc8e0aa92fd78b792ff5baf243170cfee358cf75d25d.scope - libcontainer container c6629ff3622f36085c03cc8e0aa92fd78b792ff5baf243170cfee358cf75d25d. Jan 13 20:17:09.310466 systemd[1]: Started cri-containerd-d2e47ba445d0fa703c1bda6f4660f66c5bee12b73cc61ce9d74b0e2d38499d2e.scope - libcontainer container d2e47ba445d0fa703c1bda6f4660f66c5bee12b73cc61ce9d74b0e2d38499d2e. 
Jan 13 20:17:09.341515 containerd[1450]: time="2025-01-13T20:17:09.341442182Z" level=info msg="StartContainer for \"f55418208840c331b53a2ab71b4bb415b47f2882ca20245366af44db59f77fbc\" returns successfully" Jan 13 20:17:09.371113 containerd[1450]: time="2025-01-13T20:17:09.371041014Z" level=info msg="StartContainer for \"c6629ff3622f36085c03cc8e0aa92fd78b792ff5baf243170cfee358cf75d25d\" returns successfully" Jan 13 20:17:09.371245 containerd[1450]: time="2025-01-13T20:17:09.371218167Z" level=info msg="StartContainer for \"d2e47ba445d0fa703c1bda6f4660f66c5bee12b73cc61ce9d74b0e2d38499d2e\" returns successfully" Jan 13 20:17:09.519283 kubelet[2252]: E0113 20:17:09.518334 2252 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="1.6s" Jan 13 20:17:09.539765 kubelet[2252]: W0113 20:17:09.539701 2252 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:09.539765 kubelet[2252]: E0113 20:17:09.539773 2252 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:09.554116 kubelet[2252]: W0113 20:17:09.554059 2252 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:09.554116 kubelet[2252]: E0113 20:17:09.554121 2252 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Jan 13 20:17:09.613416 kubelet[2252]: I0113 20:17:09.612637 2252 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:10.143637 kubelet[2252]: E0113 20:17:10.143534 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:10.145262 kubelet[2252]: E0113 20:17:10.145239 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:10.147122 kubelet[2252]: E0113 20:17:10.147102 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:11.073376 kubelet[2252]: I0113 20:17:11.073213 2252 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:17:11.082662 kubelet[2252]: E0113 20:17:11.082632 2252 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:11.149442 kubelet[2252]: E0113 20:17:11.149408 2252 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:11.183738 kubelet[2252]: E0113 20:17:11.183703 2252 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:11.284323 kubelet[2252]: E0113 20:17:11.284284 2252 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:11.384850 kubelet[2252]: E0113 20:17:11.384744 2252 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 20:17:12.101092 kubelet[2252]: I0113 20:17:12.101050 2252 apiserver.go:52] "Watching apiserver" Jan 13 20:17:12.105616 kubelet[2252]: I0113 20:17:12.105569 2252 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:17:13.012633 systemd[1]: Reloading requested from client PID 2528 ('systemctl') (unit session-7.scope)... Jan 13 20:17:13.012927 systemd[1]: Reloading... Jan 13 20:17:13.080102 zram_generator::config[2570]: No configuration found. Jan 13 20:17:13.246895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:17:13.317707 systemd[1]: Reloading finished in 304 ms. Jan 13 20:17:13.354272 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:13.364507 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:17:13.364696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:13.364749 systemd[1]: kubelet.service: Consumed 1.405s CPU time, 116.3M memory peak, 0B memory swap peak. Jan 13 20:17:13.384729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:13.482993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:13.488495 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:17:13.538359 kubelet[2609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:17:13.539518 kubelet[2609]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:17:13.539518 kubelet[2609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:17:13.539518 kubelet[2609]: I0113 20:17:13.538747 2609 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:17:13.544096 kubelet[2609]: I0113 20:17:13.544041 2609 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:17:13.544096 kubelet[2609]: I0113 20:17:13.544070 2609 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:17:13.544291 kubelet[2609]: I0113 20:17:13.544272 2609 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:17:13.546530 kubelet[2609]: I0113 20:17:13.546497 2609 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:17:13.550447 kubelet[2609]: I0113 20:17:13.550274 2609 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:17:13.555720 kubelet[2609]: I0113 20:17:13.555686 2609 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:17:13.555949 kubelet[2609]: I0113 20:17:13.555906 2609 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:17:13.556140 kubelet[2609]: I0113 20:17:13.555937 2609 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:17:13.556283 kubelet[2609]: I0113 20:17:13.556146 2609 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:17:13.556283 kubelet[2609]: I0113 20:17:13.556156 2609 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:17:13.556283 kubelet[2609]: I0113 20:17:13.556201 2609 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:17:13.558202 kubelet[2609]: I0113 20:17:13.556315 2609 kubelet.go:400] "Attempting to sync node with API server" Jan 13 20:17:13.558202 kubelet[2609]: I0113 20:17:13.556328 2609 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jan 13 20:17:13.558202 kubelet[2609]: I0113 20:17:13.556355 2609 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:17:13.558202 kubelet[2609]: I0113 20:17:13.556368 2609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:17:13.560759 kubelet[2609]: I0113 20:17:13.560720 2609 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:17:13.562106 kubelet[2609]: I0113 20:17:13.560921 2609 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:17:13.562570 kubelet[2609]: I0113 20:17:13.562551 2609 server.go:1264] "Started kubelet" Jan 13 20:17:13.564567 kubelet[2609]: I0113 20:17:13.564298 2609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:17:13.564567 kubelet[2609]: I0113 20:17:13.564330 2609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:17:13.567350 kubelet[2609]: I0113 20:17:13.567316 2609 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:17:13.567451 kubelet[2609]: I0113 20:17:13.567371 2609 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:17:13.573143 kubelet[2609]: I0113 20:17:13.570794 2609 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:17:13.573143 kubelet[2609]: I0113 20:17:13.571559 2609 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:17:13.573143 kubelet[2609]: I0113 20:17:13.572135 2609 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:17:13.573143 kubelet[2609]: I0113 20:17:13.572289 2609 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:17:13.585058 kubelet[2609]: I0113 20:17:13.585025 2609 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:17:13.587098 kubelet[2609]: I0113 20:17:13.585213 2609 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:17:13.587098 kubelet[2609]: I0113 20:17:13.585295 2609 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:17:13.587098 kubelet[2609]: I0113 20:17:13.585507 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:17:13.587098 kubelet[2609]: I0113 20:17:13.586638 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:17:13.587098 kubelet[2609]: I0113 20:17:13.586670 2609 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:17:13.587098 kubelet[2609]: I0113 20:17:13.586690 2609 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:17:13.587098 kubelet[2609]: E0113 20:17:13.586729 2609 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:17:13.588584 kubelet[2609]: E0113 20:17:13.588554 2609 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:17:13.620321 kubelet[2609]: I0113 20:17:13.620292 2609 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:17:13.620321 kubelet[2609]: I0113 20:17:13.620312 2609 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:17:13.620321 kubelet[2609]: I0113 20:17:13.620338 2609 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:17:13.620936 kubelet[2609]: I0113 20:17:13.620906 2609 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:17:13.620974 kubelet[2609]: I0113 20:17:13.620936 2609 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:17:13.620974 kubelet[2609]: I0113 20:17:13.620963 2609 policy_none.go:49] "None policy: Start" Jan 13 20:17:13.621754 kubelet[2609]: I0113 20:17:13.621714 2609 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:17:13.621754 kubelet[2609]: I0113 20:17:13.621752 2609 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:17:13.622280 kubelet[2609]: I0113 20:17:13.622256 2609 state_mem.go:75] "Updated machine memory state" Jan 13 20:17:13.626320 kubelet[2609]: I0113 20:17:13.625959 2609 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:17:13.626320 kubelet[2609]: I0113 20:17:13.626146 2609 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:17:13.626320 kubelet[2609]: I0113 20:17:13.626240 2609 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:17:13.675697 kubelet[2609]: I0113 20:17:13.675457 2609 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:17:13.685233 kubelet[2609]: I0113 20:17:13.684097 2609 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 20:17:13.685361 kubelet[2609]: I0113 20:17:13.685291 2609 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 20:17:13.687053 kubelet[2609]: I0113 20:17:13.687012 2609 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:17:13.689376 kubelet[2609]: I0113 20:17:13.689259 2609 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:17:13.689376 kubelet[2609]: I0113 20:17:13.689325 2609 topology_manager.go:215] "Topology Admit Handler" podUID="ae7379b464085eaa18fbfb27954779c1" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:17:13.873907 kubelet[2609]: I0113 20:17:13.873259 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:13.873907 kubelet[2609]: I0113 20:17:13.873348 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:17:13.873907 
kubelet[2609]: I0113 20:17:13.873372 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ae7379b464085eaa18fbfb27954779c1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae7379b464085eaa18fbfb27954779c1\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:13.873907 kubelet[2609]: I0113 20:17:13.873391 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:13.873907 kubelet[2609]: I0113 20:17:13.873436 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:13.874152 kubelet[2609]: I0113 20:17:13.873453 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:13.874152 kubelet[2609]: I0113 20:17:13.873509 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ae7379b464085eaa18fbfb27954779c1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ae7379b464085eaa18fbfb27954779c1\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:13.874152 kubelet[2609]: I0113 20:17:13.873527 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ae7379b464085eaa18fbfb27954779c1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ae7379b464085eaa18fbfb27954779c1\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:17:13.874152 kubelet[2609]: I0113 20:17:13.873544 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:17:14.016486 kubelet[2609]: E0113 20:17:14.016232 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:14.016486 kubelet[2609]: E0113 20:17:14.016379 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:14.017532 kubelet[2609]: E0113 20:17:14.017489 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:14.557393 kubelet[2609]: I0113 
20:17:14.557348 2609 apiserver.go:52] "Watching apiserver" Jan 13 20:17:14.572414 kubelet[2609]: I0113 20:17:14.572313 2609 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:17:14.605541 kubelet[2609]: E0113 20:17:14.605500 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:14.606425 kubelet[2609]: E0113 20:17:14.606400 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:14.607107 kubelet[2609]: E0113 20:17:14.606875 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:14.633782 kubelet[2609]: I0113 20:17:14.633701 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.633508896 podStartE2EDuration="1.633508896s" podCreationTimestamp="2025-01-13 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:14.633214247 +0000 UTC m=+1.141648813" watchObservedRunningTime="2025-01-13 20:17:14.633508896 +0000 UTC m=+1.141943382" Jan 13 20:17:14.656842 kubelet[2609]: I0113 20:17:14.656607 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.656588892 podStartE2EDuration="1.656588892s" podCreationTimestamp="2025-01-13 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:14.642587282 +0000 UTC m=+1.151021768" watchObservedRunningTime="2025-01-13 20:17:14.656588892 +0000 UTC m=+1.165023378" Jan 13 20:17:15.612557 kubelet[2609]: E0113 20:17:15.612513 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:16.608852 kubelet[2609]: E0113 20:17:16.608812 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:17.416200 kubelet[2609]: E0113 20:17:17.416171 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:18.206513 sudo[1623]: pam_unix(sudo:session): session closed for user root Jan 13 20:17:18.207657 sshd[1622]: Connection closed by 10.0.0.1 port 58736 Jan 13 20:17:18.208134 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:18.211675 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:58736.service: Deactivated successfully. Jan 13 20:17:18.213582 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:17:18.213759 systemd[1]: session-7.scope: Consumed 7.359s CPU time, 190.6M memory peak, 0B memory swap peak. Jan 13 20:17:18.214257 systemd-logind[1426]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:17:18.214983 systemd-logind[1426]: Removed session 7. 
Jan 13 20:17:21.130653 kubelet[2609]: E0113 20:17:21.130623 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:21.144678 kubelet[2609]: I0113 20:17:21.144623 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=8.144608185 podStartE2EDuration="8.144608185s" podCreationTimestamp="2025-01-13 20:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:14.65687938 +0000 UTC m=+1.165313946" watchObservedRunningTime="2025-01-13 20:17:21.144608185 +0000 UTC m=+7.653042711" Jan 13 20:17:21.620150 kubelet[2609]: E0113 20:17:21.620071 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:24.123196 update_engine[1433]: I20250113 20:17:24.123118 1433 update_attempter.cc:509] Updating boot flags... Jan 13 20:17:24.150233 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2705) Jan 13 20:17:24.175122 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2703) Jan 13 20:17:25.836713 kubelet[2609]: E0113 20:17:25.836619 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:26.626604 kubelet[2609]: E0113 20:17:26.626566 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:27.423463 kubelet[2609]: E0113 20:17:27.423169 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:28.053833 kubelet[2609]: I0113 20:17:28.053800 2609 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:17:28.061883 containerd[1450]: time="2025-01-13T20:17:28.061840608Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:17:28.063277 kubelet[2609]: I0113 20:17:28.063247 2609 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:17:29.162047 kubelet[2609]: I0113 20:17:29.161973 2609 topology_manager.go:215] "Topology Admit Handler" podUID="0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a" podNamespace="kube-system" podName="kube-proxy-gzlk8" Jan 13 20:17:29.170704 systemd[1]: Created slice kubepods-besteffort-pod0ab6ee0b_bb37_49e7_9ddb_8ea91e710f8a.slice - libcontainer container kubepods-besteffort-pod0ab6ee0b_bb37_49e7_9ddb_8ea91e710f8a.slice. Jan 13 20:17:29.220133 kubelet[2609]: I0113 20:17:29.220091 2609 topology_manager.go:215] "Topology Admit Handler" podUID="5a4ff908-0c99-4306-a9e0-18b732bbf193" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-blmq2" Jan 13 20:17:29.228129 systemd[1]: Created slice kubepods-besteffort-pod5a4ff908_0c99_4306_a9e0_18b732bbf193.slice - libcontainer container kubepods-besteffort-pod5a4ff908_0c99_4306_a9e0_18b732bbf193.slice. 
Jan 13 20:17:29.268893 kubelet[2609]: I0113 20:17:29.268844 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a-lib-modules\") pod \"kube-proxy-gzlk8\" (UID: \"0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a\") " pod="kube-system/kube-proxy-gzlk8" Jan 13 20:17:29.268893 kubelet[2609]: I0113 20:17:29.268895 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtpz\" (UniqueName: \"kubernetes.io/projected/0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a-kube-api-access-6gtpz\") pod \"kube-proxy-gzlk8\" (UID: \"0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a\") " pod="kube-system/kube-proxy-gzlk8" Jan 13 20:17:29.269073 kubelet[2609]: I0113 20:17:29.268915 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a-kube-proxy\") pod \"kube-proxy-gzlk8\" (UID: \"0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a\") " pod="kube-system/kube-proxy-gzlk8" Jan 13 20:17:29.269073 kubelet[2609]: I0113 20:17:29.268931 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a-xtables-lock\") pod \"kube-proxy-gzlk8\" (UID: \"0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a\") " pod="kube-system/kube-proxy-gzlk8" Jan 13 20:17:29.370060 kubelet[2609]: I0113 20:17:29.370000 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a4ff908-0c99-4306-a9e0-18b732bbf193-var-lib-calico\") pod \"tigera-operator-7bc55997bb-blmq2\" (UID: \"5a4ff908-0c99-4306-a9e0-18b732bbf193\") " pod="tigera-operator/tigera-operator-7bc55997bb-blmq2" Jan 13 20:17:29.370208 kubelet[2609]: I0113 20:17:29.370106 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54nrx\" (UniqueName: \"kubernetes.io/projected/5a4ff908-0c99-4306-a9e0-18b732bbf193-kube-api-access-54nrx\") pod \"tigera-operator-7bc55997bb-blmq2\" (UID: \"5a4ff908-0c99-4306-a9e0-18b732bbf193\") " pod="tigera-operator/tigera-operator-7bc55997bb-blmq2" Jan 13 20:17:29.482533 kubelet[2609]: E0113 20:17:29.481266 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:29.484766 containerd[1450]: time="2025-01-13T20:17:29.484647374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gzlk8,Uid:0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a,Namespace:kube-system,Attempt:0,}" Jan 13 20:17:29.504073 containerd[1450]: time="2025-01-13T20:17:29.503864031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:29.504073 containerd[1450]: time="2025-01-13T20:17:29.503911512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:29.504073 containerd[1450]: time="2025-01-13T20:17:29.503922072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:29.504073 containerd[1450]: time="2025-01-13T20:17:29.503987633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:29.521237 systemd[1]: Started cri-containerd-86d40bf9f59e33e900fe0ef59f5cd4003d7c4b30403de83f49bcab220a5a5e68.scope - libcontainer container 86d40bf9f59e33e900fe0ef59f5cd4003d7c4b30403de83f49bcab220a5a5e68. Jan 13 20:17:29.532066 containerd[1450]: time="2025-01-13T20:17:29.531802365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-blmq2,Uid:5a4ff908-0c99-4306-a9e0-18b732bbf193,Namespace:tigera-operator,Attempt:0,}" Jan 13 20:17:29.539009 containerd[1450]: time="2025-01-13T20:17:29.538932620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gzlk8,Uid:0ab6ee0b-bb37-49e7-9ddb-8ea91e710f8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"86d40bf9f59e33e900fe0ef59f5cd4003d7c4b30403de83f49bcab220a5a5e68\"" Jan 13 20:17:29.541358 kubelet[2609]: E0113 20:17:29.541331 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:29.544408 containerd[1450]: time="2025-01-13T20:17:29.544117090Z" level=info msg="CreateContainer within sandbox \"86d40bf9f59e33e900fe0ef59f5cd4003d7c4b30403de83f49bcab220a5a5e68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:17:29.556414 containerd[1450]: time="2025-01-13T20:17:29.556286972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:17:29.556414 containerd[1450]: time="2025-01-13T20:17:29.556376134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:17:29.556414 containerd[1450]: time="2025-01-13T20:17:29.556394454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:29.556599 containerd[1450]: time="2025-01-13T20:17:29.556482535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:17:29.570516 containerd[1450]: time="2025-01-13T20:17:29.570476042Z" level=info msg="CreateContainer within sandbox \"86d40bf9f59e33e900fe0ef59f5cd4003d7c4b30403de83f49bcab220a5a5e68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"61e78d21c987d3401513130455847d966a1c236ec47886e90ab02ade088adcad\"" Jan 13 20:17:29.573392 containerd[1450]: time="2025-01-13T20:17:29.573278240Z" level=info msg="StartContainer for \"61e78d21c987d3401513130455847d966a1c236ec47886e90ab02ade088adcad\"" Jan 13 20:17:29.574254 systemd[1]: Started cri-containerd-baa0192ff577595b51d795654bc5f8fda5920f71c9fa9ab796dd8019818d409e.scope - libcontainer container baa0192ff577595b51d795654bc5f8fda5920f71c9fa9ab796dd8019818d409e. Jan 13 20:17:29.604320 systemd[1]: Started cri-containerd-61e78d21c987d3401513130455847d966a1c236ec47886e90ab02ade088adcad.scope - libcontainer container 61e78d21c987d3401513130455847d966a1c236ec47886e90ab02ade088adcad. 
Jan 13 20:17:29.613725 containerd[1450]: time="2025-01-13T20:17:29.613646340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-blmq2,Uid:5a4ff908-0c99-4306-a9e0-18b732bbf193,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"baa0192ff577595b51d795654bc5f8fda5920f71c9fa9ab796dd8019818d409e\"" Jan 13 20:17:29.616570 containerd[1450]: time="2025-01-13T20:17:29.616540979Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 20:17:29.647334 containerd[1450]: time="2025-01-13T20:17:29.647294630Z" level=info msg="StartContainer for \"61e78d21c987d3401513130455847d966a1c236ec47886e90ab02ade088adcad\" returns successfully" Jan 13 20:17:30.655929 kubelet[2609]: E0113 20:17:30.655885 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:30.669798 kubelet[2609]: I0113 20:17:30.669730 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gzlk8" podStartSLOduration=1.6697147430000001 podStartE2EDuration="1.669714743s" podCreationTimestamp="2025-01-13 20:17:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:17:30.668021682 +0000 UTC m=+17.176456208" watchObservedRunningTime="2025-01-13 20:17:30.669714743 +0000 UTC m=+17.178149229" Jan 13 20:17:31.653266 kubelet[2609]: E0113 20:17:31.653235 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:17:41.845728 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:48384.service - OpenSSH per-connection server daemon (10.0.0.1:48384). Jan 13 20:17:41.893344 sshd[2955]: Accepted publickey for core from 10.0.0.1 port 48384 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:17:41.894776 sshd-session[2955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:17:41.898619 systemd-logind[1426]: New session 8 of user core. Jan 13 20:17:41.913257 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:17:42.041368 sshd[2957]: Connection closed by 10.0.0.1 port 48384 Jan 13 20:17:42.041733 sshd-session[2955]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:42.044195 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:17:42.044740 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:48384.service: Deactivated successfully. Jan 13 20:17:42.050537 systemd-logind[1426]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:17:42.051471 systemd-logind[1426]: Removed session 8. Jan 13 20:17:44.949182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3719665404.mount: Deactivated successfully. 
Jan 13 20:17:45.457912 containerd[1450]: time="2025-01-13T20:17:45.457860690Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:45.458415 containerd[1450]: time="2025-01-13T20:17:45.458360973Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124776" Jan 13 20:17:45.463170 containerd[1450]: time="2025-01-13T20:17:45.463119047Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:45.466092 containerd[1450]: time="2025-01-13T20:17:45.466048948Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:17:45.466950 containerd[1450]: time="2025-01-13T20:17:45.466913834Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 15.850339335s" Jan 13 20:17:45.466950 containerd[1450]: time="2025-01-13T20:17:45.466944034Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 13 20:17:45.470626 containerd[1450]: time="2025-01-13T20:17:45.470573100Z" level=info msg="CreateContainer within sandbox \"baa0192ff577595b51d795654bc5f8fda5920f71c9fa9ab796dd8019818d409e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 20:17:45.550517 containerd[1450]: time="2025-01-13T20:17:45.550464068Z" level=info msg="CreateContainer within sandbox \"baa0192ff577595b51d795654bc5f8fda5920f71c9fa9ab796dd8019818d409e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5a0d1057858aab27a3371376010879ca6de89dd33e02013cb64d789a97596977\"" Jan 13 20:17:45.551009 containerd[1450]: time="2025-01-13T20:17:45.550964552Z" level=info msg="StartContainer for \"5a0d1057858aab27a3371376010879ca6de89dd33e02013cb64d789a97596977\"" Jan 13 20:17:45.579260 systemd[1]: Started cri-containerd-5a0d1057858aab27a3371376010879ca6de89dd33e02013cb64d789a97596977.scope - libcontainer container 5a0d1057858aab27a3371376010879ca6de89dd33e02013cb64d789a97596977. 
Jan 13 20:17:45.615240 containerd[1450]: time="2025-01-13T20:17:45.615172048Z" level=info msg="StartContainer for \"5a0d1057858aab27a3371376010879ca6de89dd33e02013cb64d789a97596977\" returns successfully" Jan 13 20:17:45.698038 kubelet[2609]: I0113 20:17:45.697833 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-blmq2" podStartSLOduration=0.843341141 podStartE2EDuration="16.697799636s" podCreationTimestamp="2025-01-13 20:17:29 +0000 UTC" firstStartedPulling="2025-01-13 20:17:29.614886116 +0000 UTC m=+16.123320602" lastFinishedPulling="2025-01-13 20:17:45.469344571 +0000 UTC m=+31.977779097" observedRunningTime="2025-01-13 20:17:45.697799516 +0000 UTC m=+32.206234002" watchObservedRunningTime="2025-01-13 20:17:45.697799636 +0000 UTC m=+32.206234122" Jan 13 20:17:47.050641 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:57148.service - OpenSSH per-connection server daemon (10.0.0.1:57148). Jan 13 20:17:47.107215 sshd[3018]: Accepted publickey for core from 10.0.0.1 port 57148 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:17:47.108497 sshd-session[3018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:17:47.112511 systemd-logind[1426]: New session 9 of user core. Jan 13 20:17:47.123287 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:17:47.239144 sshd[3021]: Connection closed by 10.0.0.1 port 57148 Jan 13 20:17:47.239547 sshd-session[3018]: pam_unix(sshd:session): session closed for user core Jan 13 20:17:47.243023 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:57148.service: Deactivated successfully. Jan 13 20:17:47.244803 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:17:47.246722 systemd-logind[1426]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:17:47.247793 systemd-logind[1426]: Removed session 9. Jan 13 20:17:49.341579 kubelet[2609]: I0113 20:17:49.339905 2609 topology_manager.go:215] "Topology Admit Handler" podUID="9349fc67-e92e-4f66-aac4-0abd2af3c6fb" podNamespace="calico-system" podName="calico-node-b9g79" Jan 13 20:17:49.355655 systemd[1]: Created slice kubepods-besteffort-pod9349fc67_e92e_4f66_aac4_0abd2af3c6fb.slice - libcontainer container kubepods-besteffort-pod9349fc67_e92e_4f66_aac4_0abd2af3c6fb.slice. 
Jan 13 20:17:49.452518 kubelet[2609]: I0113 20:17:49.452255 2609 topology_manager.go:215] "Topology Admit Handler" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85" podNamespace="calico-system" podName="csi-node-driver-dp8kg"
Jan 13 20:17:49.453027 kubelet[2609]: E0113 20:17:49.453004 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp8kg" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85"
Jan 13 20:17:49.502091 kubelet[2609]: I0113 20:17:49.502033 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-node-certs\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502223 kubelet[2609]: I0113 20:17:49.502104 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-cni-log-dir\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502223 kubelet[2609]: I0113 20:17:49.502123 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-flexvol-driver-host\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502223 kubelet[2609]: I0113 20:17:49.502174 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-var-run-calico\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502223 kubelet[2609]: I0113 20:17:49.502192 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg9kg\" (UniqueName: \"kubernetes.io/projected/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-kube-api-access-gg9kg\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502223 kubelet[2609]: I0113 20:17:49.502207 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-cni-bin-dir\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502348 kubelet[2609]: I0113 20:17:49.502230 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-cni-net-dir\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502348 kubelet[2609]: I0113 20:17:49.502247 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-tigera-ca-bundle\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502348 kubelet[2609]: I0113 20:17:49.502261 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-var-lib-calico\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502348 kubelet[2609]: I0113 20:17:49.502276 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-lib-modules\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502348 kubelet[2609]: I0113 20:17:49.502300 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-xtables-lock\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.502453 kubelet[2609]: I0113 20:17:49.502321 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9349fc67-e92e-4f66-aac4-0abd2af3c6fb-policysync\") pod \"calico-node-b9g79\" (UID: \"9349fc67-e92e-4f66-aac4-0abd2af3c6fb\") " pod="calico-system/calico-node-b9g79"
Jan 13 20:17:49.603198 kubelet[2609]: I0113 20:17:49.602768 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85-registration-dir\") pod \"csi-node-driver-dp8kg\" (UID: \"fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85\") " pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:49.603198 kubelet[2609]: I0113 20:17:49.602824 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85-kubelet-dir\") pod \"csi-node-driver-dp8kg\" (UID: \"fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85\") " pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:49.603198 kubelet[2609]: I0113 20:17:49.602851 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85-varrun\") pod \"csi-node-driver-dp8kg\" (UID: \"fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85\") " pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:49.603198 kubelet[2609]: I0113 20:17:49.602923 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqkwr\" (UniqueName: \"kubernetes.io/projected/fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85-kube-api-access-vqkwr\") pod \"csi-node-driver-dp8kg\" (UID: \"fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85\") " pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:49.603198 kubelet[2609]: I0113 20:17:49.602947 2609 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85-socket-dir\") pod \"csi-node-driver-dp8kg\" (UID: \"fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85\") " pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:49.613317 kubelet[2609]: E0113 20:17:49.613282 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.613535 kubelet[2609]: W0113 20:17:49.613464 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.613535 kubelet[2609]: E0113 20:17:49.613491 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.622444 kubelet[2609]: E0113 20:17:49.622409 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.622444 kubelet[2609]: W0113 20:17:49.622437 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.622580 kubelet[2609]: E0113 20:17:49.622460 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.660740 kubelet[2609]: E0113 20:17:49.660706 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:49.661371 containerd[1450]: time="2025-01-13T20:17:49.661337334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b9g79,Uid:9349fc67-e92e-4f66-aac4-0abd2af3c6fb,Namespace:calico-system,Attempt:0,}"
Jan 13 20:17:49.683481 containerd[1450]: time="2025-01-13T20:17:49.682916711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:17:49.683481 containerd[1450]: time="2025-01-13T20:17:49.683347554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:17:49.683481 containerd[1450]: time="2025-01-13T20:17:49.683361874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:17:49.683651 containerd[1450]: time="2025-01-13T20:17:49.683529595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:17:49.703796 kubelet[2609]: E0113 20:17:49.703767 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.703796 kubelet[2609]: W0113 20:17:49.703791 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.704261 kubelet[2609]: E0113 20:17:49.703812 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.704261 kubelet[2609]: E0113 20:17:49.704036 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.704261 kubelet[2609]: W0113 20:17:49.704046 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.704261 kubelet[2609]: E0113 20:17:49.704061 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.704605 kubelet[2609]: E0113 20:17:49.704581 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.704707 kubelet[2609]: W0113 20:17:49.704658 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.704909 kubelet[2609]: E0113 20:17:49.704872 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.705145 kubelet[2609]: E0113 20:17:49.704987 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.705145 kubelet[2609]: W0113 20:17:49.704997 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.705292 kubelet[2609]: E0113 20:17:49.705235 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.705771 systemd[1]: Started cri-containerd-3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29.scope - libcontainer container 3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29.
Jan 13 20:17:49.706791 kubelet[2609]: E0113 20:17:49.706218 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.706791 kubelet[2609]: W0113 20:17:49.706235 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.706791 kubelet[2609]: E0113 20:17:49.706274 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.707876 kubelet[2609]: E0113 20:17:49.707807 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.707876 kubelet[2609]: W0113 20:17:49.707823 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.708011 kubelet[2609]: E0113 20:17:49.707914 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.708434 kubelet[2609]: E0113 20:17:49.708275 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.708434 kubelet[2609]: W0113 20:17:49.708288 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.708434 kubelet[2609]: E0113 20:17:49.708408 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.709206 kubelet[2609]: E0113 20:17:49.709123 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.709206 kubelet[2609]: W0113 20:17:49.709138 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.709388 kubelet[2609]: E0113 20:17:49.709312 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.709639 kubelet[2609]: E0113 20:17:49.709508 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.709639 kubelet[2609]: W0113 20:17:49.709519 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.709639 kubelet[2609]: E0113 20:17:49.709616 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.710334 kubelet[2609]: E0113 20:17:49.710237 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.710334 kubelet[2609]: W0113 20:17:49.710253 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.710515 kubelet[2609]: E0113 20:17:49.710443 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.710812 kubelet[2609]: E0113 20:17:49.710704 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.710812 kubelet[2609]: W0113 20:17:49.710718 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.711205 kubelet[2609]: E0113 20:17:49.711003 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.711205 kubelet[2609]: E0113 20:17:49.711101 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.711205 kubelet[2609]: W0113 20:17:49.711111 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.711587 kubelet[2609]: E0113 20:17:49.711477 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.713275 kubelet[2609]: E0113 20:17:49.713144 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.713275 kubelet[2609]: W0113 20:17:49.713173 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.713409 kubelet[2609]: E0113 20:17:49.713371 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.713409 kubelet[2609]: W0113 20:17:49.713380 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.714103 kubelet[2609]: E0113 20:17:49.713477 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.714103 kubelet[2609]: E0113 20:17:49.713507 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.714103 kubelet[2609]: E0113 20:17:49.713516 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.714103 kubelet[2609]: W0113 20:17:49.713527 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.714103 kubelet[2609]: E0113 20:17:49.713607 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.714301 kubelet[2609]: E0113 20:17:49.714267 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.714301 kubelet[2609]: W0113 20:17:49.714289 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.714393 kubelet[2609]: E0113 20:17:49.714366 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.714573 kubelet[2609]: E0113 20:17:49.714548 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.714573 kubelet[2609]: W0113 20:17:49.714565 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.714637 kubelet[2609]: E0113 20:17:49.714603 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.714734 kubelet[2609]: E0113 20:17:49.714713 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.714734 kubelet[2609]: W0113 20:17:49.714726 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.714784 kubelet[2609]: E0113 20:17:49.714763 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.714858 kubelet[2609]: E0113 20:17:49.714848 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.714883 kubelet[2609]: W0113 20:17:49.714858 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.714966 kubelet[2609]: E0113 20:17:49.714951 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.714987 kubelet[2609]: E0113 20:17:49.714969 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.714987 kubelet[2609]: W0113 20:17:49.714976 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.715026 kubelet[2609]: E0113 20:17:49.714996 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.715128 kubelet[2609]: E0113 20:17:49.715117 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.715164 kubelet[2609]: W0113 20:17:49.715128 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.715164 kubelet[2609]: E0113 20:17:49.715144 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.715353 kubelet[2609]: E0113 20:17:49.715336 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.715353 kubelet[2609]: W0113 20:17:49.715350 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.715409 kubelet[2609]: E0113 20:17:49.715365 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.715540 kubelet[2609]: E0113 20:17:49.715516 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.715540 kubelet[2609]: W0113 20:17:49.715530 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.715594 kubelet[2609]: E0113 20:17:49.715544 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.715704 kubelet[2609]: E0113 20:17:49.715693 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.715704 kubelet[2609]: W0113 20:17:49.715703 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.715829 kubelet[2609]: E0113 20:17:49.715716 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.716902 kubelet[2609]: E0113 20:17:49.716831 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.716902 kubelet[2609]: W0113 20:17:49.716857 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.716902 kubelet[2609]: E0113 20:17:49.716872 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.730416 kubelet[2609]: E0113 20:17:49.730246 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:17:49.730416 kubelet[2609]: W0113 20:17:49.730266 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:17:49.730416 kubelet[2609]: E0113 20:17:49.730284 2609 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:17:49.738813 containerd[1450]: time="2025-01-13T20:17:49.738715104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b9g79,Uid:9349fc67-e92e-4f66-aac4-0abd2af3c6fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29\""
Jan 13 20:17:49.739344 kubelet[2609]: E0113 20:17:49.739322 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:49.740375 containerd[1450]: time="2025-01-13T20:17:49.740351074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:17:50.587546 kubelet[2609]: E0113 20:17:50.587493 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp8kg" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85"
Jan 13 20:17:50.887942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654866299.mount: Deactivated successfully.
Jan 13 20:17:51.154835 containerd[1450]: time="2025-01-13T20:17:51.154795600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:17:51.156173 containerd[1450]: time="2025-01-13T20:17:51.156120168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Jan 13 20:17:51.156828 containerd[1450]: time="2025-01-13T20:17:51.156800612Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:17:51.159632 containerd[1450]: time="2025-01-13T20:17:51.159562428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:17:51.160331 containerd[1450]: time="2025-01-13T20:17:51.160148592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.419683037s"
Jan 13 20:17:51.160331 containerd[1450]: time="2025-01-13T20:17:51.160187232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 13 20:17:51.163502 containerd[1450]: time="2025-01-13T20:17:51.163387971Z" level=info msg="CreateContainer within sandbox \"3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 20:17:51.176961 containerd[1450]: time="2025-01-13T20:17:51.176911892Z" level=info msg="CreateContainer within sandbox \"3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"78c9fa10a1b2c1490eaacf746001f02525f660aaa707c1e11382f81c0d2745c4\""
Jan 13 20:17:51.177606 containerd[1450]: time="2025-01-13T20:17:51.177580256Z" level=info msg="StartContainer for \"78c9fa10a1b2c1490eaacf746001f02525f660aaa707c1e11382f81c0d2745c4\""
Jan 13 20:17:51.206250 systemd[1]: Started cri-containerd-78c9fa10a1b2c1490eaacf746001f02525f660aaa707c1e11382f81c0d2745c4.scope - libcontainer container 78c9fa10a1b2c1490eaacf746001f02525f660aaa707c1e11382f81c0d2745c4.
Jan 13 20:17:51.240258 containerd[1450]: time="2025-01-13T20:17:51.240116232Z" level=info msg="StartContainer for \"78c9fa10a1b2c1490eaacf746001f02525f660aaa707c1e11382f81c0d2745c4\" returns successfully"
Jan 13 20:17:51.283514 systemd[1]: cri-containerd-78c9fa10a1b2c1490eaacf746001f02525f660aaa707c1e11382f81c0d2745c4.scope: Deactivated successfully.
Jan 13 20:17:51.343389 containerd[1450]: time="2025-01-13T20:17:51.332715187Z" level=info msg="shim disconnected" id=78c9fa10a1b2c1490eaacf746001f02525f660aaa707c1e11382f81c0d2745c4 namespace=k8s.io
Jan 13 20:17:51.343389 containerd[1450]: time="2025-01-13T20:17:51.343387731Z" level=warning msg="cleaning up after shim disconnected" id=78c9fa10a1b2c1490eaacf746001f02525f660aaa707c1e11382f81c0d2745c4 namespace=k8s.io
Jan 13 20:17:51.343581 containerd[1450]: time="2025-01-13T20:17:51.343401491Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:17:51.698966 kubelet[2609]: E0113 20:17:51.698919 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:51.699727 containerd[1450]: time="2025-01-13T20:17:51.699618348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 20:17:52.254748 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:57150.service - OpenSSH per-connection server daemon (10.0.0.1:57150).
Jan 13 20:17:52.297620 sshd[3191]: Accepted publickey for core from 10.0.0.1 port 57150 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:52.298971 sshd-session[3191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:52.303879 systemd-logind[1426]: New session 10 of user core.
Jan 13 20:17:52.310250 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:17:52.423997 sshd[3193]: Connection closed by 10.0.0.1 port 57150
Jan 13 20:17:52.423460 sshd-session[3191]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:52.427081 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:57150.service: Deactivated successfully.
Jan 13 20:17:52.429333 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:17:52.430123 systemd-logind[1426]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:17:52.430984 systemd-logind[1426]: Removed session 10.
Jan 13 20:17:52.587727 kubelet[2609]: E0113 20:17:52.587611 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp8kg" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85"
Jan 13 20:17:54.587412 kubelet[2609]: E0113 20:17:54.587307 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dp8kg" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85"
Jan 13 20:17:55.150295 containerd[1450]: time="2025-01-13T20:17:55.150246920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:17:55.151296 containerd[1450]: time="2025-01-13T20:17:55.151221445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 13 20:17:55.151979 containerd[1450]: time="2025-01-13T20:17:55.151950609Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:17:55.155733 containerd[1450]: time="2025-01-13T20:17:55.154611304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:17:55.155733 containerd[1450]: time="2025-01-13T20:17:55.155354108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.45570396s"
Jan 13 20:17:55.155733 containerd[1450]: time="2025-01-13T20:17:55.155379748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 13 20:17:55.157718 containerd[1450]: time="2025-01-13T20:17:55.157685601Z" level=info msg="CreateContainer within sandbox \"3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:17:55.210662 containerd[1450]: time="2025-01-13T20:17:55.210345768Z" level=info msg="CreateContainer within sandbox \"3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660\""
Jan 13 20:17:55.211211 containerd[1450]: time="2025-01-13T20:17:55.211174893Z" level=info msg="StartContainer for \"4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660\""
Jan 13 20:17:55.245357 systemd[1]: Started cri-containerd-4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660.scope - libcontainer container 4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660.
Jan 13 20:17:55.497274 containerd[1450]: time="2025-01-13T20:17:55.497230256Z" level=info msg="StartContainer for \"4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660\" returns successfully"
Jan 13 20:17:55.708644 kubelet[2609]: E0113 20:17:55.708596 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:55.970802 systemd[1]: cri-containerd-4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660.scope: Deactivated successfully.
Jan 13 20:17:55.987906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660-rootfs.mount: Deactivated successfully.
Jan 13 20:17:55.993391 kubelet[2609]: I0113 20:17:55.992883 2609 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:17:55.994738 containerd[1450]: time="2025-01-13T20:17:55.994686613Z" level=info msg="shim disconnected" id=4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660 namespace=k8s.io
Jan 13 20:17:55.994738 containerd[1450]: time="2025-01-13T20:17:55.994737694Z" level=warning msg="cleaning up after shim disconnected" id=4e4ef4cd4e42626bf6a3ecb61ac0833eab496c39e8dd36878dc3cbd99949f660 namespace=k8s.io
Jan 13 20:17:55.994854 containerd[1450]: time="2025-01-13T20:17:55.994748214Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:17:56.591757 systemd[1]: Created slice kubepods-besteffort-podfd0ab3f1_0b07_4a49_9d5d_91eb72e39d85.slice - libcontainer container kubepods-besteffort-podfd0ab3f1_0b07_4a49_9d5d_91eb72e39d85.slice.
Jan 13 20:17:56.593696 containerd[1450]: time="2025-01-13T20:17:56.593656258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:0,}"
Jan 13 20:17:56.712371 kubelet[2609]: E0113 20:17:56.712326 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:17:56.714809 containerd[1450]: time="2025-01-13T20:17:56.714768346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 20:17:56.808366 containerd[1450]: time="2025-01-13T20:17:56.808288246Z" level=error msg="Failed to destroy network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.808725 containerd[1450]: time="2025-01-13T20:17:56.808698928Z" level=error msg="encountered an error cleaning up failed sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.808786 containerd[1450]: time="2025-01-13T20:17:56.808767009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.810118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6-shm.mount: Deactivated successfully.
Jan 13 20:17:56.812627 kubelet[2609]: E0113 20:17:56.812541 2609 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:56.812732 kubelet[2609]: E0113 20:17:56.812640 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:56.812732 kubelet[2609]: E0113 20:17:56.812663 2609 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:56.812774 kubelet[2609]: E0113 20:17:56.812725 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dp8kg_calico-system(fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dp8kg_calico-system(fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dp8kg" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85"
Jan 13 20:17:57.434976 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:50090.service - OpenSSH per-connection server daemon (10.0.0.1:50090).
Jan 13 20:17:57.485785 sshd[3316]: Accepted publickey for core from 10.0.0.1 port 50090 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:57.487069 sshd-session[3316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:57.490700 systemd-logind[1426]: New session 11 of user core.
Jan 13 20:17:57.497230 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:17:57.609801 sshd[3318]: Connection closed by 10.0.0.1 port 50090
Jan 13 20:17:57.610963 sshd-session[3316]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:57.619573 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:50090.service: Deactivated successfully.
Jan 13 20:17:57.621066 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:17:57.622393 systemd-logind[1426]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:17:57.626402 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:50092.service - OpenSSH per-connection server daemon (10.0.0.1:50092).
Jan 13 20:17:57.628996 systemd-logind[1426]: Removed session 11.
Jan 13 20:17:57.678050 sshd[3331]: Accepted publickey for core from 10.0.0.1 port 50092 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:57.679341 sshd-session[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:57.685423 systemd-logind[1426]: New session 12 of user core.
Jan 13 20:17:57.691288 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:17:57.717897 kubelet[2609]: I0113 20:17:57.717861 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6"
Jan 13 20:17:57.721063 containerd[1450]: time="2025-01-13T20:17:57.720848651Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\""
Jan 13 20:17:57.730142 containerd[1450]: time="2025-01-13T20:17:57.730098420Z" level=info msg="Ensure that sandbox 3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6 in task-service has been cleanup successfully"
Jan 13 20:17:57.732448 systemd[1]: run-netns-cni\x2dc6d1383a\x2d3ff2\x2de16e\x2ddf23\x2d740bb4ed312e.mount: Deactivated successfully.
Jan 13 20:17:57.736100 containerd[1450]: time="2025-01-13T20:17:57.736051811Z" level=info msg="TearDown network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" successfully"
Jan 13 20:17:57.736203 containerd[1450]: time="2025-01-13T20:17:57.736091531Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" returns successfully"
Jan 13 20:17:57.736834 containerd[1450]: time="2025-01-13T20:17:57.736799735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:1,}"
Jan 13 20:17:57.905480 containerd[1450]: time="2025-01-13T20:17:57.905427259Z" level=error msg="Failed to destroy network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.906210 containerd[1450]: time="2025-01-13T20:17:57.906161063Z" level=error msg="encountered an error cleaning up failed sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.906490 containerd[1450]: time="2025-01-13T20:17:57.906462825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.907382 kubelet[2609]: E0113 20:17:57.907321 2609 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:57.907458 kubelet[2609]: E0113 20:17:57.907394 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:57.907458 kubelet[2609]: E0113 20:17:57.907414 2609 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:57.907518 kubelet[2609]: E0113 20:17:57.907451 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dp8kg_calico-system(fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dp8kg_calico-system(fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dp8kg" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85"
Jan 13 20:17:57.909749 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630-shm.mount: Deactivated successfully.
Jan 13 20:17:57.950766 sshd[3333]: Connection closed by 10.0.0.1 port 50092
Jan 13 20:17:57.951508 sshd-session[3331]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:57.964948 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:50092.service: Deactivated successfully.
Jan 13 20:17:57.968522 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:17:57.971943 systemd-logind[1426]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:17:57.983937 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:50104.service - OpenSSH per-connection server daemon (10.0.0.1:50104).
Jan 13 20:17:57.985642 systemd-logind[1426]: Removed session 12.
Jan 13 20:17:58.024285 sshd[3382]: Accepted publickey for core from 10.0.0.1 port 50104 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:17:58.026733 sshd-session[3382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:17:58.033753 systemd-logind[1426]: New session 13 of user core.
Jan 13 20:17:58.039710 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:17:58.363308 sshd[3384]: Connection closed by 10.0.0.1 port 50104
Jan 13 20:17:58.363785 sshd-session[3382]: pam_unix(sshd:session): session closed for user core
Jan 13 20:17:58.369538 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:50104.service: Deactivated successfully.
Jan 13 20:17:58.371896 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:17:58.373823 systemd-logind[1426]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:17:58.375150 systemd-logind[1426]: Removed session 13.
Jan 13 20:17:58.721273 kubelet[2609]: I0113 20:17:58.721223 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630"
Jan 13 20:17:58.722911 containerd[1450]: time="2025-01-13T20:17:58.722803513Z" level=info msg="StopPodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\""
Jan 13 20:17:58.723614 containerd[1450]: time="2025-01-13T20:17:58.723293955Z" level=info msg="Ensure that sandbox bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630 in task-service has been cleanup successfully"
Jan 13 20:17:58.723712 containerd[1450]: time="2025-01-13T20:17:58.723687957Z" level=info msg="TearDown network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" successfully"
Jan 13 20:17:58.723712 containerd[1450]: time="2025-01-13T20:17:58.723709077Z" level=info msg="StopPodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" returns successfully"
Jan 13 20:17:58.724420 containerd[1450]: time="2025-01-13T20:17:58.724395401Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\""
Jan 13 20:17:58.724512 containerd[1450]: time="2025-01-13T20:17:58.724488801Z" level=info msg="TearDown network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" successfully"
Jan 13 20:17:58.724512 containerd[1450]: time="2025-01-13T20:17:58.724502041Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" returns successfully"
Jan 13 20:17:58.725289 containerd[1450]: time="2025-01-13T20:17:58.725261645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:2,}"
Jan 13 20:17:58.725910 systemd[1]: run-netns-cni\x2dc2252122\x2d7f24\x2d1b97\x2d03a0\x2d29b942900cbd.mount: Deactivated successfully.
Jan 13 20:17:58.809238 containerd[1450]: time="2025-01-13T20:17:58.808745715Z" level=error msg="Failed to destroy network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.809238 containerd[1450]: time="2025-01-13T20:17:58.809071116Z" level=error msg="encountered an error cleaning up failed sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.809238 containerd[1450]: time="2025-01-13T20:17:58.809144237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.809989 kubelet[2609]: E0113 20:17:58.809535 2609 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:17:58.809989 kubelet[2609]: E0113 20:17:58.809593 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:58.809989 kubelet[2609]: E0113 20:17:58.809621 2609 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp8kg"
Jan 13 20:17:58.811234 kubelet[2609]: E0113 20:17:58.809657 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dp8kg_calico-system(fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dp8kg_calico-system(fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dp8kg" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85"
Jan 13 20:17:58.810474 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30-shm.mount: Deactivated successfully.
Jan 13 20:17:59.730644 kubelet[2609]: I0113 20:17:59.730612 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30"
Jan 13 20:17:59.731706 containerd[1450]: time="2025-01-13T20:17:59.731671673Z" level=info msg="StopPodSandbox for \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\""
Jan 13 20:17:59.731940 containerd[1450]: time="2025-01-13T20:17:59.731849354Z" level=info msg="Ensure that sandbox 7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30 in task-service has been cleanup successfully"
Jan 13 20:17:59.733021 containerd[1450]: time="2025-01-13T20:17:59.732991319Z" level=info msg="TearDown network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\" successfully"
Jan 13 20:17:59.733021 containerd[1450]: time="2025-01-13T20:17:59.733013720Z" level=info msg="StopPodSandbox for \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\" returns successfully"
Jan 13 20:17:59.733805 systemd[1]: run-netns-cni\x2d97c44355\x2dcd8f\x2dc9a0\x2df744\x2db418db2c7b5f.mount: Deactivated successfully.
Jan 13 20:17:59.734274 containerd[1450]: time="2025-01-13T20:17:59.734241446Z" level=info msg="StopPodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\""
Jan 13 20:17:59.734839 containerd[1450]: time="2025-01-13T20:17:59.734815849Z" level=info msg="TearDown network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" successfully"
Jan 13 20:17:59.734933 containerd[1450]: time="2025-01-13T20:17:59.734838969Z" level=info msg="StopPodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" returns successfully"
Jan 13 20:17:59.735720 containerd[1450]: time="2025-01-13T20:17:59.735572333Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\""
Jan 13 20:17:59.735720 containerd[1450]: time="2025-01-13T20:17:59.735655333Z" level=info msg="TearDown network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" successfully"
Jan 13 20:17:59.735720 containerd[1450]: time="2025-01-13T20:17:59.735665973Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" returns successfully"
Jan 13 20:17:59.736539 containerd[1450]: time="2025-01-13T20:17:59.736501977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:3,}"
Jan 13 20:18:00.127837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976571148.mount: Deactivated successfully.
Jan 13 20:18:00.134928 containerd[1450]: time="2025-01-13T20:18:00.134776817Z" level=error msg="Failed to destroy network for sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:18:00.135302 containerd[1450]: time="2025-01-13T20:18:00.135270339Z" level=error msg="encountered an error cleaning up failed sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:18:00.135519 containerd[1450]: time="2025-01-13T20:18:00.135409780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:18:00.135682 kubelet[2609]: E0113 20:18:00.135638 2609 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:18:00.135782 kubelet[2609]: E0113 20:18:00.135693 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp8kg" Jan 13 20:18:00.135782 kubelet[2609]: E0113 20:18:00.135714 2609 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dp8kg" Jan 13 20:18:00.135782 kubelet[2609]: E0113 20:18:00.135755 2609 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dp8kg_calico-system(fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dp8kg_calico-system(fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-dp8kg" podUID="fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85" Jan 13 20:18:00.136811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba-shm.mount: Deactivated successfully. Jan 13 20:18:00.157171 containerd[1450]: time="2025-01-13T20:18:00.157117127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:00.159244 containerd[1450]: time="2025-01-13T20:18:00.159174978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 20:18:00.160204 containerd[1450]: time="2025-01-13T20:18:00.160165183Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:00.161923 containerd[1450]: time="2025-01-13T20:18:00.161852871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:00.167237 containerd[1450]: time="2025-01-13T20:18:00.167204377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.452395431s" Jan 13 20:18:00.167322 containerd[1450]: time="2025-01-13T20:18:00.167241178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 20:18:00.181252 containerd[1450]: time="2025-01-13T20:18:00.181131567Z" level=info msg="CreateContainer within sandbox \"3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:18:00.192305 containerd[1450]: time="2025-01-13T20:18:00.192211022Z" level=info msg="CreateContainer within sandbox \"3052f45aa9cdbc0fc24cbb5233ab192c817b1bb5637541260d3dbfb0d7635c29\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ed9fc8f3dbc91ff4118a5573958a15e414717c39abaa81d69476b0baf9a58f34\"" Jan 13 20:18:00.192735 containerd[1450]: time="2025-01-13T20:18:00.192707704Z" level=info msg="StartContainer for \"ed9fc8f3dbc91ff4118a5573958a15e414717c39abaa81d69476b0baf9a58f34\"" Jan 13 20:18:00.244290 systemd[1]: Started cri-containerd-ed9fc8f3dbc91ff4118a5573958a15e414717c39abaa81d69476b0baf9a58f34.scope - libcontainer container ed9fc8f3dbc91ff4118a5573958a15e414717c39abaa81d69476b0baf9a58f34. Jan 13 20:18:00.269709 containerd[1450]: time="2025-01-13T20:18:00.269199124Z" level=info msg="StartContainer for \"ed9fc8f3dbc91ff4118a5573958a15e414717c39abaa81d69476b0baf9a58f34\" returns successfully" Jan 13 20:18:00.465773 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:18:00.465877 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 20:18:00.736487 kubelet[2609]: E0113 20:18:00.736065 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:18:00.738784 kubelet[2609]: I0113 20:18:00.738557 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba" Jan 13 20:18:00.740322 containerd[1450]: time="2025-01-13T20:18:00.739415897Z" level=info msg="StopPodSandbox for \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\"" Jan 13 20:18:00.740611 containerd[1450]: time="2025-01-13T20:18:00.740365461Z" level=info msg="Ensure that sandbox 617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba in task-service has been cleanup successfully" Jan 13 20:18:00.741507 containerd[1450]: time="2025-01-13T20:18:00.740689263Z" level=info msg="TearDown network for sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\" successfully" Jan 13 20:18:00.741507 containerd[1450]: time="2025-01-13T20:18:00.740712463Z" level=info msg="StopPodSandbox for \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\" returns successfully" Jan 13 20:18:00.741507 containerd[1450]: time="2025-01-13T20:18:00.741053745Z" level=info msg="StopPodSandbox for \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\"" Jan 13 20:18:00.741507 containerd[1450]: time="2025-01-13T20:18:00.741156745Z" level=info msg="TearDown network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\" successfully" Jan 13 20:18:00.741507 containerd[1450]: time="2025-01-13T20:18:00.741167905Z" level=info msg="StopPodSandbox for \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\" returns successfully" Jan 13 20:18:00.741912 containerd[1450]: time="2025-01-13T20:18:00.741886869Z" level=info msg="StopPodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\"" Jan 13 20:18:00.741989 containerd[1450]: time="2025-01-13T20:18:00.741972949Z" level=info msg="TearDown network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" successfully" Jan 13 20:18:00.742856 containerd[1450]: time="2025-01-13T20:18:00.742827194Z" level=info msg="StopPodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" returns successfully" Jan 13 20:18:00.744053 containerd[1450]: time="2025-01-13T20:18:00.744028199Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\"" Jan 13 20:18:00.744403 containerd[1450]: time="2025-01-13T20:18:00.744380961Z" level=info msg="TearDown network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" successfully" Jan 13 20:18:00.744497 containerd[1450]: time="2025-01-13T20:18:00.744482682Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" returns successfully" Jan 13 20:18:00.746115 containerd[1450]: time="2025-01-13T20:18:00.746067770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:4,}" Jan 13 20:18:00.952672 systemd-networkd[1385]: cali578307a0250: Link UP Jan 13 20:18:00.952865 systemd-networkd[1385]: cali578307a0250: Gained carrier Jan 13 20:18:00.964467 kubelet[2609]: I0113 20:18:00.964399 2609 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/calico-node-b9g79" podStartSLOduration=1.533269088 podStartE2EDuration="11.964383133s" podCreationTimestamp="2025-01-13 20:17:49 +0000 UTC" firstStartedPulling="2025-01-13 20:17:49.740045232 +0000 UTC m=+36.248479718" lastFinishedPulling="2025-01-13 20:18:00.171159277 +0000 UTC m=+46.679593763" observedRunningTime="2025-01-13 20:18:00.753438646 +0000 UTC m=+47.261873132" watchObservedRunningTime="2025-01-13 20:18:00.964383133 +0000 UTC m=+47.472817619" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.776 [INFO][3544] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.825 [INFO][3544] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dp8kg-eth0 csi-node-driver- calico-system fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85 673 0 2025-01-13 20:17:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dp8kg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali578307a0250 [] []}} ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Namespace="calico-system" Pod="csi-node-driver-dp8kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--dp8kg-" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.825 [INFO][3544] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Namespace="calico-system" Pod="csi-node-driver-dp8kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--dp8kg-eth0" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.902 [INFO][3579] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" HandleID="k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Workload="localhost-k8s-csi--node--driver--dp8kg-eth0" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.918 [INFO][3579] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" HandleID="k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Workload="localhost-k8s-csi--node--driver--dp8kg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a1cc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dp8kg", "timestamp":"2025-01-13 20:18:00.902309705 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.918 [INFO][3579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.918 [INFO][3579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.918 [INFO][3579] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.920 [INFO][3579] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.924 [INFO][3579] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.928 [INFO][3579] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.930 [INFO][3579] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.932 [INFO][3579] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.932 [INFO][3579] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.934 [INFO][3579] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.939 [INFO][3579] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.943 [INFO][3579] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.943 [INFO][3579] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" host="localhost" Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.943 [INFO][3579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:18:00.964756 containerd[1450]: 2025-01-13 20:18:00.943 [INFO][3579] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" HandleID="k8s-pod-network.8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Workload="localhost-k8s-csi--node--driver--dp8kg-eth0" Jan 13 20:18:00.965217 containerd[1450]: 2025-01-13 20:18:00.946 [INFO][3544] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Namespace="calico-system" Pod="csi-node-driver-dp8kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--dp8kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dp8kg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dp8kg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali578307a0250", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:18:00.965217 containerd[1450]: 2025-01-13 20:18:00.946 [INFO][3544] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Namespace="calico-system" Pod="csi-node-driver-dp8kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--dp8kg-eth0" Jan 13 20:18:00.965217 containerd[1450]: 2025-01-13 20:18:00.946 [INFO][3544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali578307a0250 ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Namespace="calico-system" Pod="csi-node-driver-dp8kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--dp8kg-eth0" Jan 13 20:18:00.965217 containerd[1450]: 2025-01-13 20:18:00.953 [INFO][3544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Namespace="calico-system" Pod="csi-node-driver-dp8kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--dp8kg-eth0" Jan 13 20:18:00.965217 containerd[1450]: 2025-01-13 20:18:00.953 [INFO][3544] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Namespace="calico-system" Pod="csi-node-driver-dp8kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--dp8kg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dp8kg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 17, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b", Pod:"csi-node-driver-dp8kg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali578307a0250", MAC:"4a:17:08:aa:18:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:18:00.965217 containerd[1450]: 2025-01-13 20:18:00.962 [INFO][3544] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b" Namespace="calico-system" Pod="csi-node-driver-dp8kg" WorkloadEndpoint="localhost-k8s-csi--node--driver--dp8kg-eth0" Jan 13 20:18:00.980642 containerd[1450]: time="2025-01-13T20:18:00.980559413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:18:00.980642 containerd[1450]: time="2025-01-13T20:18:00.980608373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:18:00.980642 containerd[1450]: time="2025-01-13T20:18:00.980630653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:18:00.982132 containerd[1450]: time="2025-01-13T20:18:00.980713534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:18:01.000950 systemd[1]: Started cri-containerd-8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b.scope - libcontainer container 8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b. 
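
[Editor's note] The ipam/ipam.go lines above record the standard Calico assignment sequence: take the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, then claim 192.168.88.129 from it. A short Go sketch of the block arithmetic, assuming only the prefix printed in the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block for which the log shows a confirmed host affinity.
        block := netip.MustParsePrefix("192.168.88.128/26")

        // A /26 spans 64 addresses beginning at the network address .128,
        // so the first address handed out is .129, the IP the IPAM plugin
        // claims for csi-node-driver-dp8kg above.
        addr := block.Addr()
        for i := 0; i < 4; i++ {
            fmt.Println(addr, "in block:", block.Contains(addr))
            addr = addr.Next()
        }
    }
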
Jan 13 20:18:01.018131 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:18:01.036270 containerd[1450]: time="2025-01-13T20:18:01.036219286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dp8kg,Uid:fd0ab3f1-0b07-4a49-9d5d-91eb72e39d85,Namespace:calico-system,Attempt:4,} returns sandbox id \"8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b\"" Jan 13 20:18:01.038573 containerd[1450]: time="2025-01-13T20:18:01.038536378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:18:01.082746 systemd[1]: run-netns-cni\x2dd38ae5dc\x2dbf67\x2d10ec\x2d1c1b\x2dbdde1b9082ac.mount: Deactivated successfully. Jan 13 20:18:01.745288 kubelet[2609]: E0113 20:18:01.745243 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:18:02.261241 systemd-networkd[1385]: cali578307a0250: Gained IPv6LL Jan 13 20:18:03.056923 containerd[1450]: time="2025-01-13T20:18:03.056856740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:03.057450 containerd[1450]: time="2025-01-13T20:18:03.057395903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 20:18:03.058825 containerd[1450]: time="2025-01-13T20:18:03.058430748Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:03.068467 containerd[1450]: time="2025-01-13T20:18:03.068415715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:03.069312 containerd[1450]: time="2025-01-13T20:18:03.069276159Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.030703741s" Jan 13 20:18:03.069349 containerd[1450]: time="2025-01-13T20:18:03.069323199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 20:18:03.071279 containerd[1450]: time="2025-01-13T20:18:03.071240488Z" level=info msg="CreateContainer within sandbox \"8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:18:03.086488 containerd[1450]: time="2025-01-13T20:18:03.086444480Z" level=info msg="CreateContainer within sandbox \"8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b302802f9f4aac4e02a9038000da29406871f53886a33b0c91416ee6de212182\"" Jan 13 20:18:03.087294 containerd[1450]: time="2025-01-13T20:18:03.087267004Z" level=info msg="StartContainer for \"b302802f9f4aac4e02a9038000da29406871f53886a33b0c91416ee6de212182\"" Jan 13 20:18:03.116301 systemd[1]: Started 
cri-containerd-b302802f9f4aac4e02a9038000da29406871f53886a33b0c91416ee6de212182.scope - libcontainer container b302802f9f4aac4e02a9038000da29406871f53886a33b0c91416ee6de212182. Jan 13 20:18:03.147940 containerd[1450]: time="2025-01-13T20:18:03.147890011Z" level=info msg="StartContainer for \"b302802f9f4aac4e02a9038000da29406871f53886a33b0c91416ee6de212182\" returns successfully" Jan 13 20:18:03.148913 containerd[1450]: time="2025-01-13T20:18:03.148894336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:18:03.374707 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:35972.service - OpenSSH per-connection server daemon (10.0.0.1:35972). Jan 13 20:18:03.425147 sshd[3827]: Accepted publickey for core from 10.0.0.1 port 35972 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:03.426802 sshd-session[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:03.430730 systemd-logind[1426]: New session 14 of user core. Jan 13 20:18:03.440258 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:18:03.558192 sshd[3829]: Connection closed by 10.0.0.1 port 35972 Jan 13 20:18:03.558698 sshd-session[3827]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:03.561405 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:35972.service: Deactivated successfully. Jan 13 20:18:03.564437 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:18:03.566093 systemd-logind[1426]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:18:03.566967 systemd-logind[1426]: Removed session 14. Jan 13 20:18:04.250988 containerd[1450]: time="2025-01-13T20:18:04.250941291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:04.251842 containerd[1450]: time="2025-01-13T20:18:04.251523574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 20:18:04.252623 containerd[1450]: time="2025-01-13T20:18:04.252588259Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:04.256692 containerd[1450]: time="2025-01-13T20:18:04.256631358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:18:04.257429 containerd[1450]: time="2025-01-13T20:18:04.257257841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.108265905s" Jan 13 20:18:04.257429 containerd[1450]: time="2025-01-13T20:18:04.257290601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 20:18:04.259807 containerd[1450]: time="2025-01-13T20:18:04.259767173Z" level=info msg="CreateContainer within sandbox 
\"8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:18:04.273429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844885483.mount: Deactivated successfully. Jan 13 20:18:04.282771 containerd[1450]: time="2025-01-13T20:18:04.282729600Z" level=info msg="CreateContainer within sandbox \"8a3b5544659e5f99679e2564cf17e0cfdce0e56a7a6a69a2ff72aeb7702dfd1b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f949e1b154621ac1a18273863e7cc82f2911e792a6dd205e89a38bde70d8f352\"" Jan 13 20:18:04.283156 containerd[1450]: time="2025-01-13T20:18:04.283135641Z" level=info msg="StartContainer for \"f949e1b154621ac1a18273863e7cc82f2911e792a6dd205e89a38bde70d8f352\"" Jan 13 20:18:04.303378 systemd[1]: run-containerd-runc-k8s.io-f949e1b154621ac1a18273863e7cc82f2911e792a6dd205e89a38bde70d8f352-runc.ZjxoYL.mount: Deactivated successfully. Jan 13 20:18:04.311238 systemd[1]: Started cri-containerd-f949e1b154621ac1a18273863e7cc82f2911e792a6dd205e89a38bde70d8f352.scope - libcontainer container f949e1b154621ac1a18273863e7cc82f2911e792a6dd205e89a38bde70d8f352. Jan 13 20:18:04.343063 containerd[1450]: time="2025-01-13T20:18:04.343008881Z" level=info msg="StartContainer for \"f949e1b154621ac1a18273863e7cc82f2911e792a6dd205e89a38bde70d8f352\" returns successfully" Jan 13 20:18:04.692023 kubelet[2609]: I0113 20:18:04.691964 2609 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:18:04.694520 kubelet[2609]: I0113 20:18:04.694477 2609 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:18:04.764178 kubelet[2609]: I0113 20:18:04.764026 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dp8kg" podStartSLOduration=12.543954253 podStartE2EDuration="15.764008843s" podCreationTimestamp="2025-01-13 20:17:49 +0000 UTC" firstStartedPulling="2025-01-13 20:18:01.037859734 +0000 UTC m=+47.546294220" lastFinishedPulling="2025-01-13 20:18:04.257914324 +0000 UTC m=+50.766348810" observedRunningTime="2025-01-13 20:18:04.763825123 +0000 UTC m=+51.272259609" watchObservedRunningTime="2025-01-13 20:18:04.764008843 +0000 UTC m=+51.272443329" Jan 13 20:18:08.574209 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:35986.service - OpenSSH per-connection server daemon (10.0.0.1:35986). Jan 13 20:18:08.627314 sshd[4014]: Accepted publickey for core from 10.0.0.1 port 35986 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:08.629110 sshd-session[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:08.633103 systemd-logind[1426]: New session 15 of user core. Jan 13 20:18:08.641234 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:18:08.758791 sshd[4017]: Connection closed by 10.0.0.1 port 35986 Jan 13 20:18:08.759210 sshd-session[4014]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:08.769573 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:35986.service: Deactivated successfully. Jan 13 20:18:08.770976 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:18:08.772982 systemd-logind[1426]: Session 15 logged out. Waiting for processes to exit. 
Jan 13 20:18:08.781507 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:35996.service - OpenSSH per-connection server daemon (10.0.0.1:35996). Jan 13 20:18:08.782378 systemd-logind[1426]: Removed session 15. Jan 13 20:18:08.822132 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 35996 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:08.823279 sshd-session[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:08.827151 systemd-logind[1426]: New session 16 of user core. Jan 13 20:18:08.836318 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:18:09.032286 sshd[4037]: Connection closed by 10.0.0.1 port 35996 Jan 13 20:18:09.033134 sshd-session[4033]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:09.041989 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:35996.service: Deactivated successfully. Jan 13 20:18:09.043735 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:18:09.045146 systemd-logind[1426]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:18:09.053539 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:36000.service - OpenSSH per-connection server daemon (10.0.0.1:36000). Jan 13 20:18:09.054762 systemd-logind[1426]: Removed session 16. Jan 13 20:18:09.094682 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 36000 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:09.096007 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:09.100000 systemd-logind[1426]: New session 17 of user core. Jan 13 20:18:09.108248 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:18:10.650782 sshd[4051]: Connection closed by 10.0.0.1 port 36000 Jan 13 20:18:10.653329 sshd-session[4047]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:10.662479 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:36000.service: Deactivated successfully. Jan 13 20:18:10.665581 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:18:10.667180 systemd-logind[1426]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:18:10.674437 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:36016.service - OpenSSH per-connection server daemon (10.0.0.1:36016). Jan 13 20:18:10.677777 systemd-logind[1426]: Removed session 17. Jan 13 20:18:10.716654 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 36016 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:10.718150 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:10.722167 systemd-logind[1426]: New session 18 of user core. Jan 13 20:18:10.737296 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:18:10.972886 sshd[4121]: Connection closed by 10.0.0.1 port 36016 Jan 13 20:18:10.973365 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:10.987117 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:36016.service: Deactivated successfully. Jan 13 20:18:10.988749 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:18:10.990151 systemd-logind[1426]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:18:10.991425 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:36024.service - OpenSSH per-connection server daemon (10.0.0.1:36024). Jan 13 20:18:10.992401 systemd-logind[1426]: Removed session 18. 
Jan 13 20:18:11.035355 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 36024 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:11.036710 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:11.041158 systemd-logind[1426]: New session 19 of user core. Jan 13 20:18:11.047295 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:18:11.174137 sshd[4135]: Connection closed by 10.0.0.1 port 36024 Jan 13 20:18:11.174504 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:11.177606 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:36024.service: Deactivated successfully. Jan 13 20:18:11.179269 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:18:11.180839 systemd-logind[1426]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:18:11.182052 systemd-logind[1426]: Removed session 19. Jan 13 20:18:13.579613 containerd[1450]: time="2025-01-13T20:18:13.579576540Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\"" Jan 13 20:18:13.579998 containerd[1450]: time="2025-01-13T20:18:13.579686460Z" level=info msg="TearDown network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" successfully" Jan 13 20:18:13.579998 containerd[1450]: time="2025-01-13T20:18:13.579699580Z" level=info msg="StopPodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" returns successfully" Jan 13 20:18:13.580755 containerd[1450]: time="2025-01-13T20:18:13.580225343Z" level=info msg="RemovePodSandbox for \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\"" Jan 13 20:18:13.580755 containerd[1450]: time="2025-01-13T20:18:13.580401623Z" level=info msg="Forcibly stopping sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\"" Jan 13 20:18:13.580755 containerd[1450]: time="2025-01-13T20:18:13.580687425Z" level=info msg="TearDown network for sandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" successfully" Jan 13 20:18:13.599226 containerd[1450]: time="2025-01-13T20:18:13.599172422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:18:13.600984 containerd[1450]: time="2025-01-13T20:18:13.600917790Z" level=info msg="RemovePodSandbox \"3d03b3004e20e7b43acabc087b6909de9165debd7a4891cb5a1ddd6b646fe5c6\" returns successfully" Jan 13 20:18:13.601631 containerd[1450]: time="2025-01-13T20:18:13.601605353Z" level=info msg="StopPodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\"" Jan 13 20:18:13.601715 containerd[1450]: time="2025-01-13T20:18:13.601700353Z" level=info msg="TearDown network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" successfully" Jan 13 20:18:13.601715 containerd[1450]: time="2025-01-13T20:18:13.601713433Z" level=info msg="StopPodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" returns successfully" Jan 13 20:18:13.602047 containerd[1450]: time="2025-01-13T20:18:13.602010954Z" level=info msg="RemovePodSandbox for \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\"" Jan 13 20:18:13.602047 containerd[1450]: time="2025-01-13T20:18:13.602040755Z" level=info msg="Forcibly stopping sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\"" Jan 13 20:18:13.602131 containerd[1450]: time="2025-01-13T20:18:13.602116195Z" level=info msg="TearDown network for sandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" successfully" Jan 13 20:18:13.608212 containerd[1450]: time="2025-01-13T20:18:13.608176700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:18:13.608286 containerd[1450]: time="2025-01-13T20:18:13.608230341Z" level=info msg="RemovePodSandbox \"bb6ad7b8a0e616d8fbb62bb4267da4850f031f997505e6faf2f91b8d8f2fa630\" returns successfully" Jan 13 20:18:13.608571 containerd[1450]: time="2025-01-13T20:18:13.608549862Z" level=info msg="StopPodSandbox for \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\"" Jan 13 20:18:13.608643 containerd[1450]: time="2025-01-13T20:18:13.608628342Z" level=info msg="TearDown network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\" successfully" Jan 13 20:18:13.608643 containerd[1450]: time="2025-01-13T20:18:13.608640822Z" level=info msg="StopPodSandbox for \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\" returns successfully" Jan 13 20:18:13.614032 containerd[1450]: time="2025-01-13T20:18:13.613799684Z" level=info msg="RemovePodSandbox for \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\"" Jan 13 20:18:13.614032 containerd[1450]: time="2025-01-13T20:18:13.613845684Z" level=info msg="Forcibly stopping sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\"" Jan 13 20:18:13.614032 containerd[1450]: time="2025-01-13T20:18:13.613921845Z" level=info msg="TearDown network for sandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\" successfully" Jan 13 20:18:13.616032 containerd[1450]: time="2025-01-13T20:18:13.615984813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:18:13.616117 containerd[1450]: time="2025-01-13T20:18:13.616042774Z" level=info msg="RemovePodSandbox \"7eeceded95a17d4fd0f356ef20fb7220ec2efc1b9cdaa59a5f949bac650b4f30\" returns successfully" Jan 13 20:18:13.616557 containerd[1450]: time="2025-01-13T20:18:13.616478575Z" level=info msg="StopPodSandbox for \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\"" Jan 13 20:18:13.616557 containerd[1450]: time="2025-01-13T20:18:13.616577976Z" level=info msg="TearDown network for sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\" successfully" Jan 13 20:18:13.616557 containerd[1450]: time="2025-01-13T20:18:13.616588456Z" level=info msg="StopPodSandbox for \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\" returns successfully" Jan 13 20:18:13.620977 containerd[1450]: time="2025-01-13T20:18:13.620220231Z" level=info msg="RemovePodSandbox for \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\"" Jan 13 20:18:13.620977 containerd[1450]: time="2025-01-13T20:18:13.620257031Z" level=info msg="Forcibly stopping sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\"" Jan 13 20:18:13.620977 containerd[1450]: time="2025-01-13T20:18:13.620334632Z" level=info msg="TearDown network for sandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\" successfully" Jan 13 20:18:13.622805 containerd[1450]: time="2025-01-13T20:18:13.622773002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:18:13.622927 containerd[1450]: time="2025-01-13T20:18:13.622909362Z" level=info msg="RemovePodSandbox \"617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba\" returns successfully" Jan 13 20:18:16.184937 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:59860.service - OpenSSH per-connection server daemon (10.0.0.1:59860). Jan 13 20:18:16.223680 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 59860 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:16.224777 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:16.228633 systemd-logind[1426]: New session 20 of user core. Jan 13 20:18:16.240259 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:18:16.349377 sshd[4279]: Connection closed by 10.0.0.1 port 59860 Jan 13 20:18:16.349701 sshd-session[4277]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:16.352724 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:59860.service: Deactivated successfully. Jan 13 20:18:16.354601 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:18:16.355262 systemd-logind[1426]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:18:16.356291 systemd-logind[1426]: Removed session 20. Jan 13 20:18:17.258108 kernel: bpftool[4333]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:18:17.402114 systemd-networkd[1385]: vxlan.calico: Link UP Jan 13 20:18:17.402123 systemd-networkd[1385]: vxlan.calico: Gained carrier Jan 13 20:18:19.350229 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL Jan 13 20:18:21.363740 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:59874.service - OpenSSH per-connection server daemon (10.0.0.1:59874). 
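
[Editor's note] The StopPodSandbox/RemovePodSandbox sequence at 20:18:13 above is kubelet garbage-collecting the sandboxes that failed earlier, issued over the CRI to containerd; the "not found" warnings are expected, since the sandboxes were already torn down. A hedged sketch of the same RemovePodSandbox call against containerd's CRI endpoint; the socket path is an assumed default, and the sandbox ID is one of those removed above:

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI endpoint on this host.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // The same CRI call kubelet issued for each stale sandbox above.
        client := runtimeapi.NewRuntimeServiceClient(conn)
        _, err = client.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{
            PodSandboxId: "617b0257d17afedc9cef27c84503471a08ad7ed17b5d2df6390557d73f4514ba",
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Println("sandbox removed")
    }
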
Jan 13 20:18:21.416395 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 59874 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:21.417867 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:21.421690 systemd-logind[1426]: New session 21 of user core. Jan 13 20:18:21.434292 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 20:18:21.575390 sshd[4478]: Connection closed by 10.0.0.1 port 59874 Jan 13 20:18:21.575735 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:21.578965 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:59874.service: Deactivated successfully. Jan 13 20:18:21.581358 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 20:18:21.581943 systemd-logind[1426]: Session 21 logged out. Waiting for processes to exit. Jan 13 20:18:21.582698 systemd-logind[1426]: Removed session 21. Jan 13 20:18:22.587881 kubelet[2609]: E0113 20:18:22.587839 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:18:26.586768 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:35992.service - OpenSSH per-connection server daemon (10.0.0.1:35992). Jan 13 20:18:26.625979 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 35992 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:26.627116 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:26.631167 systemd-logind[1426]: New session 22 of user core. Jan 13 20:18:26.639221 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 20:18:26.763569 sshd[4507]: Connection closed by 10.0.0.1 port 35992 Jan 13 20:18:26.764064 sshd-session[4505]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:26.767158 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:35992.service: Deactivated successfully. Jan 13 20:18:26.768850 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 20:18:26.769437 systemd-logind[1426]: Session 22 logged out. Waiting for processes to exit. Jan 13 20:18:26.770185 systemd-logind[1426]: Removed session 22. Jan 13 20:18:31.774591 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:35998.service - OpenSSH per-connection server daemon (10.0.0.1:35998). Jan 13 20:18:31.817555 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 35998 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:31.818912 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:31.822287 systemd-logind[1426]: New session 23 of user core. Jan 13 20:18:31.829274 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 20:18:31.942277 sshd[4533]: Connection closed by 10.0.0.1 port 35998 Jan 13 20:18:31.942601 sshd-session[4531]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:31.946134 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:35998.service: Deactivated successfully. Jan 13 20:18:31.947931 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 20:18:31.948575 systemd-logind[1426]: Session 23 logged out. Waiting for processes to exit. Jan 13 20:18:31.949443 systemd-logind[1426]: Removed session 23. 
Jan 13 20:18:34.588367 kubelet[2609]: E0113 20:18:34.588317 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:18:36.953821 systemd[1]: Started sshd@23-10.0.0.82:22-10.0.0.1:49984.service - OpenSSH per-connection server daemon (10.0.0.1:49984). Jan 13 20:18:36.994065 sshd[4545]: Accepted publickey for core from 10.0.0.1 port 49984 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:18:36.995513 sshd-session[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:36.999586 systemd-logind[1426]: New session 24 of user core. Jan 13 20:18:37.006279 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 20:18:37.136206 sshd[4547]: Connection closed by 10.0.0.1 port 49984 Jan 13 20:18:37.136561 sshd-session[4545]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:37.139952 systemd[1]: sshd@23-10.0.0.82:22-10.0.0.1:49984.service: Deactivated successfully. Jan 13 20:18:37.145248 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 20:18:37.146177 systemd-logind[1426]: Session 24 logged out. Waiting for processes to exit. Jan 13 20:18:37.147140 systemd-logind[1426]: Removed session 24.