Jan 13 20:16:49.895579 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:16:49.895608 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:16:49.895622 kernel: KASLR enabled
Jan 13 20:16:49.895630 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:16:49.895638 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132303d98
Jan 13 20:16:49.895646 kernel: random: crng init done
Jan 13 20:16:49.895656 kernel: secureboot: Secure boot disabled
Jan 13 20:16:49.895665 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:16:49.895674 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 13 20:16:49.895682 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:16:49.895693 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895701 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895710 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895719 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895730 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895741 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895750 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895759 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895768 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:16:49.895777 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:16:49.895822 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 13 20:16:49.895831 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:16:49.895853 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:49.895863 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Jan 13 20:16:49.895872 kernel: Zone ranges:
Jan 13 20:16:49.895881 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:16:49.895893 kernel: DMA32 empty
Jan 13 20:16:49.895902 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 13 20:16:49.895912 kernel: Movable zone start for each node
Jan 13 20:16:49.895920 kernel: Early memory node ranges
Jan 13 20:16:49.895929 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 13 20:16:49.895938 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 13 20:16:49.895947 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 13 20:16:49.895956 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 13 20:16:49.895965 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 13 20:16:49.895974 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:16:49.895983 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 13 20:16:49.895994 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:16:49.896003 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:16:49.896012 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:16:49.896025 kernel: psci: Trusted OS migration not required
Jan 13 20:16:49.896087 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:16:49.896102 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:16:49.896115 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:16:49.896125 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:16:49.896134 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:16:49.896144 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:16:49.896154 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:16:49.896164 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:16:49.896173 kernel: CPU features: detected: Spectre-v4
Jan 13 20:16:49.896183 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:16:49.896192 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:16:49.896202 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:16:49.896212 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:16:49.896223 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:16:49.896234 kernel: alternatives: applying boot alternatives
Jan 13 20:16:49.896245 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:16:49.896255 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:16:49.896265 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:16:49.896274 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:16:49.896284 kernel: Fallback order for Node 0: 0
Jan 13 20:16:49.896293 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 13 20:16:49.896303 kernel: Policy zone: Normal
Jan 13 20:16:49.896312 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:16:49.896322 kernel: software IO TLB: area num 2.
Jan 13 20:16:49.896333 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 13 20:16:49.896344 kernel: Memory: 3881336K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 214664K reserved, 0K cma-reserved)
Jan 13 20:16:49.896353 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:16:49.896363 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:16:49.896373 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:16:49.896383 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:16:49.896393 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:16:49.896402 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:16:49.896412 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:16:49.896422 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:16:49.896431 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:16:49.896443 kernel: GICv3: 256 SPIs implemented
Jan 13 20:16:49.896453 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:16:49.896462 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:16:49.896472 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:16:49.896482 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:16:49.896491 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:16:49.896501 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:16:49.896511 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:16:49.896520 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 13 20:16:49.896530 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 13 20:16:49.896540 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:16:49.896552 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:49.896561 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:16:49.896572 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:16:49.896581 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:16:49.896591 kernel: Console: colour dummy device 80x25
Jan 13 20:16:49.896646 kernel: ACPI: Core revision 20230628
Jan 13 20:16:49.896660 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:16:49.896670 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:16:49.896680 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:16:49.896690 kernel: landlock: Up and running.
Jan 13 20:16:49.896703 kernel: SELinux: Initializing.
Jan 13 20:16:49.896713 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:49.896723 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:16:49.896734 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:49.896744 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:16:49.896754 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:16:49.896765 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:16:49.896774 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:16:49.896839 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:16:49.896905 kernel: Remapping and enabling EFI services.
Jan 13 20:16:49.896917 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:16:49.896927 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:16:49.896937 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:16:49.896967 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 13 20:16:49.896978 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:16:49.896988 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:16:49.896998 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:16:49.897008 kernel: SMP: Total of 2 processors activated.
Jan 13 20:16:49.897017 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:16:49.897031 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:16:49.897041 kernel: CPU features: detected: Common not Private translations
Jan 13 20:16:49.897059 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:16:49.897071 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:16:49.897082 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:16:49.897092 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:16:49.897102 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:16:49.897113 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:16:49.897124 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:16:49.897137 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:16:49.897147 kernel: alternatives: applying system-wide alternatives
Jan 13 20:16:49.897157 kernel: devtmpfs: initialized
Jan 13 20:16:49.897168 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:16:49.897178 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:16:49.897189 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:16:49.897199 kernel: SMBIOS 3.0.0 present.
Jan 13 20:16:49.897212 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 13 20:16:49.897222 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:16:49.897233 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:16:49.897244 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:16:49.897254 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:16:49.897265 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:16:49.897275 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Jan 13 20:16:49.897312 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:16:49.897323 kernel: cpuidle: using governor menu
Jan 13 20:16:49.897338 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:16:49.897349 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:16:49.897359 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:16:49.897370 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:16:49.897380 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:16:49.897391 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:16:49.897401 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:16:49.897412 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:16:49.897804 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:16:49.897822 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:16:49.897833 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:16:49.897882 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:16:49.897904 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:16:49.897914 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:16:49.897925 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:16:49.897936 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:16:49.897946 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:16:49.897957 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:16:49.897971 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:16:49.897982 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:16:49.897998 kernel: ACPI: Interpreter enabled
Jan 13 20:16:49.898010 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:16:49.898021 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:16:49.898032 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:16:49.898043 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:16:49.898053 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:16:49.898224 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:16:49.898355 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:16:49.898427 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:16:49.898501 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:16:49.898565 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:16:49.898575 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:16:49.898583 kernel: PCI host bridge to bus 0000:00
Jan 13 20:16:49.898657 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:49.898721 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:16:49.898779 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:49.899319 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:16:49.899416 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:16:49.899500 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 13 20:16:49.899565 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 13 20:16:49.899635 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:49.899712 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.899779 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 13 20:16:49.899894 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.899964 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 13 20:16:49.900036 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.900104 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 13 20:16:49.900182 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.900246 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 13 20:16:49.900316 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.900381 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 13 20:16:49.900453 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.900520 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 13 20:16:49.900591 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.900719 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 13 20:16:49.900919 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.901004 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 13 20:16:49.901122 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:16:49.901241 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 13 20:16:49.901360 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 13 20:16:49.901432 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Jan 13 20:16:49.901512 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:49.901580 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 13 20:16:49.901647 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:49.904019 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:49.904133 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 20:16:49.904203 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 13 20:16:49.904283 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 13 20:16:49.904352 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 13 20:16:49.904419 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 13 20:16:49.904495 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 13 20:16:49.904566 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 13 20:16:49.904646 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 20:16:49.904713 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 13 20:16:49.904812 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 13 20:16:49.906261 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 13 20:16:49.906352 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:49.906436 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:16:49.906513 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 13 20:16:49.906583 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 13 20:16:49.906649 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:16:49.906724 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 13 20:16:49.906809 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:49.906900 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:16:49.906977 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 13 20:16:49.907043 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 13 20:16:49.907107 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 13 20:16:49.907185 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 20:16:49.907250 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:49.907363 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:16:49.907437 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 20:16:49.907503 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 13 20:16:49.907572 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 13 20:16:49.907641 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 20:16:49.907718 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 13 20:16:49.907790 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jan 13 20:16:49.910123 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 20:16:49.911114 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:49.911250 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:16:49.911338 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 20:16:49.911407 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:49.911499 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:16:49.911597 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 20:16:49.911666 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:49.911731 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:16:49.911961 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 20:16:49.912038 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:49.912132 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:16:49.912231 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 13 20:16:49.912299 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:49.912438 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 13 20:16:49.912548 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:49.912621 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 13 20:16:49.912714 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:49.912828 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 13 20:16:49.913985 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:49.914065 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 13 20:16:49.914131 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:49.914196 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:49.914261 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:49.914334 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:49.914400 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:49.914467 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:49.914531 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:49.914597 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 13 20:16:49.914661 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:49.914731 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 13 20:16:49.914830 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 13 20:16:49.918001 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 13 20:16:49.918084 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 20:16:49.918155 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 13 20:16:49.918223 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 20:16:49.918292 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 13 20:16:49.918357 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 20:16:49.918425 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 13 20:16:49.918498 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 20:16:49.918566 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 13 20:16:49.918632 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 20:16:49.918697 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 13 20:16:49.918761 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 20:16:49.918936 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 13 20:16:49.919017 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 20:16:49.919091 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 13 20:16:49.919162 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 20:16:49.919228 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 13 20:16:49.919291 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 13 20:16:49.919358 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 13 20:16:49.919429 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 13 20:16:49.919497 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:16:49.919563 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 13 20:16:49.919629 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 13 20:16:49.919698 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 13 20:16:49.919762 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 13 20:16:49.920937 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:49.921125 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 13 20:16:49.921205 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 13 20:16:49.921295 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 13 20:16:49.921387 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 13 20:16:49.921497 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:49.921581 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:16:49.921662 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 13 20:16:49.921766 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 13 20:16:49.922947 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 13 20:16:49.923071 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 13 20:16:49.923144 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:49.923216 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:16:49.923285 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 13 20:16:49.923347 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 13 20:16:49.923409 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 13 20:16:49.923472 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:49.923542 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 13 20:16:49.923611 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 13 20:16:49.923676 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 13 20:16:49.923739 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 13 20:16:49.923924 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:49.924031 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 13 20:16:49.924952 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 13 20:16:49.925030 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 13 20:16:49.925095 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 13 20:16:49.925162 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:49.925224 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:49.925296 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 13 20:16:49.925362 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 13 20:16:49.925427 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 13 20:16:49.925494 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 13 20:16:49.925571 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 13 20:16:49.925654 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:49.925722 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:49.925805 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 13 20:16:49.925889 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 13 20:16:49.925954 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:49.926020 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:49.926094 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 13 20:16:49.926161 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 13 20:16:49.926223 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 13 20:16:49.926290 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:49.926356 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:16:49.926414 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:16:49.926471 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:16:49.926544 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 20:16:49.926604 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 13 20:16:49.926664 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:16:49.926736 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 13 20:16:49.926883 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 13 20:16:49.926964 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:16:49.927036 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 13 20:16:49.927099 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 13 20:16:49.927160 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:16:49.927229 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 13 20:16:49.927293 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 13 20:16:49.927355 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:16:49.927434 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 13 20:16:49.927497 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 13 20:16:49.927556 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:16:49.927623 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 13 20:16:49.927685 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 13 20:16:49.927745 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:16:49.927833 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 13 20:16:49.930078 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 13 20:16:49.930155 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:16:49.930231 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 13 20:16:49.930291 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 13 20:16:49.930349 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:16:49.930415 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 13 20:16:49.930473 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 13 20:16:49.930531 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:16:49.930543 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:16:49.930551 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:16:49.930559 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:16:49.930567 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:16:49.930575 kernel: iommu: Default domain type: Translated
Jan 13 20:16:49.930582 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:16:49.930590 kernel: efivars: Registered efivars operations
Jan 13 20:16:49.930599 kernel: vgaarb: loaded
Jan 13 20:16:49.930607 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:16:49.930616 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:16:49.930624 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:16:49.930632 kernel: pnp: PnP ACPI init
Jan 13 20:16:49.930703 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:16:49.930714 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:16:49.930722 kernel: NET: Registered PF_INET protocol family
Jan 13 20:16:49.930730 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:16:49.930738 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:16:49.930748 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:16:49.930756 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:16:49.930763 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:16:49.930771 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:16:49.930779 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:49.930800 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:16:49.930808 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:16:49.932154 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:49.932178 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:16:49.932192 kernel: kvm [1]: HYP mode not available
Jan 13 20:16:49.932200 kernel: Initialise system trusted keyrings
Jan 13 20:16:49.932208 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:16:49.932287 kernel: Key type asymmetric registered
Jan 13 20:16:49.932300 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:16:49.932308 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:16:49.932316 kernel: io scheduler mq-deadline registered
Jan 13 20:16:49.932323 kernel: io scheduler kyber registered
Jan 13 20:16:49.932331 kernel: io scheduler bfq registered
Jan 13 20:16:49.932343 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:16:49.932435 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 13 20:16:49.932506 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 13 20:16:49.932580 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.932659 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 13 20:16:49.932728 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 13 20:16:49.932828 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.934072 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 13 20:16:49.934153 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Jan 13 20:16:49.934255 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.934328 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Jan 13 20:16:49.934395 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Jan 13 20:16:49.934497 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.934614 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Jan 13 20:16:49.934696 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Jan 13 20:16:49.936291 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.936406 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Jan 13 20:16:49.936475 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Jan 13 20:16:49.936540 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.936621 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Jan 13 20:16:49.936688 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Jan 13 20:16:49.936752 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.936884 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Jan 13 20:16:49.936956 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Jan 13 20:16:49.937021 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.937036 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Jan 13 20:16:49.937104 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Jan 13 20:16:49.937212 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Jan 13 20:16:49.937278 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:16:49.937288 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:16:49.937297 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:16:49.937305 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:16:49.937382 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:49.937458 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:49.937534 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Jan 13 20:16:49.937545 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:16:49.937553 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 13 20:16:49.937619 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Jan 13 20:16:49.937630 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Jan 13 20:16:49.937640 kernel: thunder_xcv, ver 1.0
Jan 13 20:16:49.937650 kernel: thunder_bgx, ver 1.0
Jan 13 20:16:49.937658 kernel: nicpf, ver 1.0
Jan 13 20:16:49.937665 kernel: nicvf, ver 1.0
Jan 13 20:16:49.937742 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:16:49.937820 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:16:49 UTC (1736799409)
Jan 13 20:16:49.937831 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:16:49.937839 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:16:49.937880 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:16:49.937891 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:16:49.937899 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:16:49.937907 kernel: Segment Routing with IPv6
Jan 13 20:16:49.937915 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:16:49.937923 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:16:49.937931 kernel: Key type dns_resolver registered
Jan 13 20:16:49.937938 kernel: registered taskstats version 1
Jan 13 20:16:49.937946 kernel: Loading compiled-in X.509 certificates
Jan 13 20:16:49.937954 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:16:49.937963 kernel: Key type .fscrypt registered
Jan 13 20:16:49.937971 kernel: Key type fscrypt-provisioning registered
Jan 13 20:16:49.937979 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:16:49.937986 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:16:49.938001 kernel: ima: No architecture policies found
Jan 13 20:16:49.938009 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:16:49.938017 kernel: clk: Disabling unused clocks
Jan 13 20:16:49.938025 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:16:49.938033 kernel: Run /init as init process
Jan 13 20:16:49.938042 kernel: with arguments:
Jan 13 20:16:49.938050 kernel: /init
Jan 13 20:16:49.938058 kernel: with environment:
Jan 13 20:16:49.938065 kernel: HOME=/
Jan 13 20:16:49.938073 kernel: TERM=linux
Jan 13 20:16:49.938080 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:16:49.938090 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:16:49.938100 systemd[1]: Detected virtualization kvm.
Jan 13 20:16:49.938110 systemd[1]: Detected architecture arm64.
Jan 13 20:16:49.938118 systemd[1]: Running in initrd.
Jan 13 20:16:49.938126 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:16:49.938134 systemd[1]: Hostname set to .
Jan 13 20:16:49.938142 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:16:49.938151 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:16:49.938159 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:16:49.938167 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:16:49.938178 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:16:49.938186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:16:49.938195 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:16:49.938204 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:16:49.938213 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:16:49.938222 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:16:49.938231 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:16:49.938240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:16:49.938248 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:16:49.938256 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:16:49.938264 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:16:49.938272 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:16:49.938280 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:16:49.938289 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:16:49.938297 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:16:49.938307 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:16:49.938317 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:16:49.938325 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:16:49.938333 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:16:49.938341 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:16:49.938350 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:16:49.938358 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:16:49.938366 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:16:49.938376 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:16:49.938384 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:16:49.938393 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:16:49.938401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:49.938409 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:16:49.938439 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 20:16:49.938463 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:16:49.938472 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:16:49.938481 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:16:49.938491 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:16:49.938499 kernel: Bridge firewalling registered
Jan 13 20:16:49.938507 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:16:49.938515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:49.938524 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:49.938533 systemd-journald[238]: Journal started
Jan 13 20:16:49.938553 systemd-journald[238]: Runtime Journal (/run/log/journal/a1c511ed4e1348c9a472d41e2a6ba61a) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:16:49.909028 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 20:16:49.926038 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 20:16:49.942870 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:16:49.942907 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:16:49.942949 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:16:49.956157 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:16:49.969043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:16:49.970469 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:49.971879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:16:49.983136 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:16:49.984974 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:16:49.986996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:16:49.996133 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:16:49.999681 dracut-cmdline[271]: dracut-dracut-053
Jan 13 20:16:50.004228 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:16:50.026317 systemd-resolved[277]: Positive Trust Anchors:
Jan 13 20:16:50.026393 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:16:50.026424 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:16:50.032111 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 13 20:16:50.033227 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:16:50.034187 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:16:50.109882 kernel: SCSI subsystem initialized
Jan 13 20:16:50.114916 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:16:50.122896 kernel: iscsi: registered transport (tcp)
Jan 13 20:16:50.136888 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:16:50.136952 kernel: QLogic iSCSI HBA Driver
Jan 13 20:16:50.186557 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:16:50.194319 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:16:50.217331 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:16:50.217435 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:16:50.217470 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:16:50.268919 kernel: raid6: neonx8 gen() 15563 MB/s
Jan 13 20:16:50.285924 kernel: raid6: neonx4 gen() 15584 MB/s
Jan 13 20:16:50.302911 kernel: raid6: neonx2 gen() 13204 MB/s
Jan 13 20:16:50.319912 kernel: raid6: neonx1 gen() 10434 MB/s
Jan 13 20:16:50.336909 kernel: raid6: int64x8 gen() 6912 MB/s
Jan 13 20:16:50.353897 kernel: raid6: int64x4 gen() 7327 MB/s
Jan 13 20:16:50.370887 kernel: raid6: int64x2 gen() 6099 MB/s
Jan 13 20:16:50.387912 kernel: raid6: int64x1 gen() 5030 MB/s
Jan 13 20:16:50.388007 kernel: raid6: using algorithm neonx4 gen() 15584 MB/s
Jan 13 20:16:50.404909 kernel: raid6: .... xor() 12273 MB/s, rmw enabled
Jan 13 20:16:50.404981 kernel: raid6: using neon recovery algorithm
Jan 13 20:16:50.409929 kernel: xor: measuring software checksum speed
Jan 13 20:16:50.409977 kernel: 8regs : 19721 MB/sec
Jan 13 20:16:50.409997 kernel: 32regs : 19660 MB/sec
Jan 13 20:16:50.410951 kernel: arm64_neon : 27052 MB/sec
Jan 13 20:16:50.411006 kernel: xor: using function: arm64_neon (27052 MB/sec)
Jan 13 20:16:50.460894 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:16:50.475352 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:16:50.482088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:16:50.496922 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Jan 13 20:16:50.500387 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:16:50.510869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:16:50.525709 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 13 20:16:50.559537 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:16:50.565050 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:16:50.614112 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:16:50.625389 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:16:50.642361 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:16:50.647153 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:16:50.648492 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:16:50.649126 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:16:50.657031 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:16:50.673625 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:16:50.714143 kernel: scsi host0: Virtio SCSI HBA
Jan 13 20:16:50.723058 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 13 20:16:50.723704 kernel: ACPI: bus type USB registered
Jan 13 20:16:50.723731 kernel: usbcore: registered new interface driver usbfs
Jan 13 20:16:50.723742 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 13 20:16:50.725044 kernel: usbcore: registered new interface driver hub
Jan 13 20:16:50.725895 kernel: usbcore: registered new device driver usb
Jan 13 20:16:50.734429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:16:50.757609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:50.759562 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:50.760401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:16:50.760876 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:50.762971 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:50.772183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:16:50.790298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:16:50.796046 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:16:50.806906 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:16:50.817295 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 13 20:16:50.817397 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 13 20:16:50.817493 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 13 20:16:50.817572 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 13 20:16:50.817658 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 13 20:16:50.817668 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 13 20:16:50.817746 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 13 20:16:50.817862 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 13 20:16:50.817948 kernel: hub 1-0:1.0: USB hub found
Jan 13 20:16:50.818054 kernel: hub 1-0:1.0: 4 ports detected
Jan 13 20:16:50.818132 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 13 20:16:50.818227 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 13 20:16:50.818919 kernel: hub 2-0:1.0: USB hub found
Jan 13 20:16:50.819024 kernel: hub 2-0:1.0: 4 ports detected
Jan 13 20:16:50.819100 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 13 20:16:50.830134 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 13 20:16:50.830289 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 13 20:16:50.830374 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 13 20:16:50.830460 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 20:16:50.830541 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:16:50.830551 kernel: GPT:17805311 != 80003071
Jan 13 20:16:50.830560 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:16:50.830569 kernel: GPT:17805311 != 80003071
Jan 13 20:16:50.830578 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:16:50.830587 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:50.830597 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 13 20:16:50.835725 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:16:50.871134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 13 20:16:50.877450 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 13 20:16:50.882873 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (515)
Jan 13 20:16:50.889927 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (519)
Jan 13 20:16:50.901167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 13 20:16:50.901836 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 13 20:16:50.909936 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 13 20:16:50.920120 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:16:50.931330 disk-uuid[577]: Primary Header is updated.
Jan 13 20:16:50.931330 disk-uuid[577]: Secondary Entries is updated.
Jan 13 20:16:50.931330 disk-uuid[577]: Secondary Header is updated.
Jan 13 20:16:50.941011 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:51.056932 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 13 20:16:51.298969 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 13 20:16:51.433898 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 13 20:16:51.434108 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 13 20:16:51.435878 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 13 20:16:51.490672 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 13 20:16:51.490976 kernel: usbcore: registered new interface driver usbhid
Jan 13 20:16:51.490997 kernel: usbhid: USB HID core driver
Jan 13 20:16:51.956106 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 20:16:51.957296 disk-uuid[578]: The operation has completed successfully.
Jan 13 20:16:52.006486 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:16:52.007263 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:16:52.031606 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:16:52.035344 sh[592]: Success
Jan 13 20:16:52.049008 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:16:52.107204 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:16:52.109880 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:16:52.116022 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:16:52.133858 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:16:52.133937 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:52.133956 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:16:52.133974 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:16:52.133991 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:16:52.141351 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 20:16:52.143541 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:16:52.144373 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:16:52.150036 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:16:52.152977 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:16:52.164538 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:52.164591 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:16:52.164605 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:16:52.169866 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:16:52.169923 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:16:52.182045 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:16:52.183067 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:16:52.190238 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:16:52.198085 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:16:52.288607 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:16:52.294812 ignition[685]: Ignition 2.20.0
Jan 13 20:16:52.294824 ignition[685]: Stage: fetch-offline
Jan 13 20:16:52.297125 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:16:52.294894 ignition[685]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:16:52.298169 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:16:52.294903 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:16:52.295058 ignition[685]: parsed url from cmdline: ""
Jan 13 20:16:52.295062 ignition[685]: no config URL provided
Jan 13 20:16:52.295067 ignition[685]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:16:52.295074 ignition[685]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:16:52.295079 ignition[685]: failed to fetch config: resource requires networking
Jan 13 20:16:52.295261 ignition[685]: Ignition finished successfully
Jan 13 20:16:52.321201 systemd-networkd[779]: lo: Link UP
Jan 13 20:16:52.321219 systemd-networkd[779]: lo: Gained carrier
Jan 13 20:16:52.323295 systemd-networkd[779]: Enumeration completed
Jan 13 20:16:52.323424 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:16:52.324927 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:52.324932 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:52.325135 systemd[1]: Reached target network.target - Network.
Jan 13 20:16:52.328531 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:52.328535 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:16:52.329679 systemd-networkd[779]: eth0: Link UP
Jan 13 20:16:52.329683 systemd-networkd[779]: eth0: Gained carrier
Jan 13 20:16:52.329695 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:16:52.339354 systemd-networkd[779]: eth1: Link UP Jan 13 20:16:52.339365 systemd-networkd[779]: eth1: Gained carrier Jan 13 20:16:52.339381 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:52.341998 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 20:16:52.357609 ignition[782]: Ignition 2.20.0 Jan 13 20:16:52.357619 ignition[782]: Stage: fetch Jan 13 20:16:52.357856 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:52.357868 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:52.357972 ignition[782]: parsed url from cmdline: "" Jan 13 20:16:52.357976 ignition[782]: no config URL provided Jan 13 20:16:52.357980 ignition[782]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:16:52.357988 ignition[782]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:16:52.358076 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 13 20:16:52.358798 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 13 20:16:52.374971 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:52.390023 systemd-networkd[779]: eth0: DHCPv4 address 138.199.153.199/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:16:52.558992 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 13 20:16:52.563200 ignition[782]: GET result: OK Jan 13 20:16:52.563288 ignition[782]: parsing config with SHA512: 3f18d84ea90d6c49d326e76fa28a04c4d2f8b8745c1ed5d02bda96fe6203f1ec1db2c7a2dad5f4bd11a1585a58265dd779d169b95a6e882ca0fde03769cf3f4c Jan 13 20:16:52.569951 unknown[782]: fetched base config from "system" Jan 13 20:16:52.570538 unknown[782]: fetched base config from "system" Jan 13 20:16:52.570546 unknown[782]: fetched user config from "hetzner" Jan 13 20:16:52.571494 ignition[782]: fetch: fetch complete Jan 13 20:16:52.571501 ignition[782]: fetch: fetch passed Jan 13 20:16:52.573631 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 20:16:52.571567 ignition[782]: Ignition finished successfully Jan 13 20:16:52.582127 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 20:16:52.596977 ignition[790]: Ignition 2.20.0 Jan 13 20:16:52.596989 ignition[790]: Stage: kargs Jan 13 20:16:52.597187 ignition[790]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:52.597198 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:52.598441 ignition[790]: kargs: kargs passed Jan 13 20:16:52.598502 ignition[790]: Ignition finished successfully Jan 13 20:16:52.601244 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:16:52.607120 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:16:52.622491 ignition[797]: Ignition 2.20.0 Jan 13 20:16:52.622502 ignition[797]: Stage: disks Jan 13 20:16:52.622701 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:52.622711 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:52.623828 ignition[797]: disks: disks passed Jan 13 20:16:52.625976 ignition[797]: Ignition finished successfully Jan 13 20:16:52.628244 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
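The fetch stage above shows Ignition's retry-until-networked behaviour: attempt #1 fails with "network is unreachable", DHCP then assigns addresses, and attempt #2 succeeds, after which the SHA512 of the parsed config is logged. A minimal sketch of that fetch-retry-hash flow, assuming only the endpoint URL from the log (illustrative Python, not Ignition's actual Go implementation):

    import hashlib
    import time
    import urllib.error
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # from the log

    def fetch_userdata(url=USERDATA_URL, attempts=5, delay=2.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as exc:
                # Attempt #1 above failed exactly like this, because DHCP had
                # not finished when the initrd first tried the endpoint.
                print(f"GET {url}: attempt #{attempt} failed: {exc}")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")

    config = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())

Logging the digest, as Ignition does, ties a given boot to the exact user config it consumed.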
Jan 13 20:16:52.629011 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:16:52.630419 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:16:52.631733 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:16:52.633350 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:16:52.634267 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:16:52.643186 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:16:52.661374 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 20:16:52.668303 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 20:16:52.675293 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:16:52.717927 kernel: EXT4-fs (sda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none. Jan 13 20:16:52.719172 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:16:52.720360 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:16:52.729014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:16:52.732915 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:16:52.736557 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 13 20:16:52.740686 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:16:52.740738 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:16:52.748630 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:16:52.752190 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (813) Jan 13 20:16:52.758116 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:52.758179 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:52.758192 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:52.758302 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:16:52.771523 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:52.771684 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:52.776705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:16:52.804634 coreos-metadata[815]: Jan 13 20:16:52.804 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 13 20:16:52.808355 coreos-metadata[815]: Jan 13 20:16:52.808 INFO Fetch successful Jan 13 20:16:52.810075 coreos-metadata[815]: Jan 13 20:16:52.809 INFO wrote hostname ci-4152-2-0-9-7c8f4a1e31 to /sysroot/etc/hostname Jan 13 20:16:52.812958 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 13 20:16:52.825896 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:16:52.832701 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:16:52.838248 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:16:52.843086 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:16:52.952415 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:16:52.959960 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:16:52.964390 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:16:52.972960 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:53.003902 ignition[929]: INFO : Ignition 2.20.0 Jan 13 20:16:53.003902 ignition[929]: INFO : Stage: mount Jan 13 20:16:53.003902 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:53.003902 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:53.003902 ignition[929]: INFO : mount: mount passed Jan 13 20:16:53.003902 ignition[929]: INFO : Ignition finished successfully Jan 13 20:16:53.004664 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:16:53.005650 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:16:53.011070 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:16:53.133353 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:16:53.139152 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:16:53.152004 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (942) Jan 13 20:16:53.153247 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:16:53.153299 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:16:53.153310 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:16:53.155887 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:16:53.155953 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:16:53.158934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:16:53.181442 ignition[959]: INFO : Ignition 2.20.0 Jan 13 20:16:53.182465 ignition[959]: INFO : Stage: files Jan 13 20:16:53.183288 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:53.183946 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:53.185642 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:16:53.187432 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:16:53.188332 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:16:53.192463 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:16:53.193660 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:16:53.194975 unknown[959]: wrote ssh authorized keys file for user: core Jan 13 20:16:53.195927 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:16:53.198642 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:16:53.199880 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 20:16:53.199880 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:16:53.199880 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 13 20:16:53.316874 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 20:16:53.430000 systemd-networkd[779]: eth1: Gained IPv6LL Jan 13 20:16:53.556673 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 13 20:16:53.556673 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:16:53.556673 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 13 20:16:53.942133 systemd-networkd[779]: eth0: Gained IPv6LL Jan 13 20:16:54.167297 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 13 20:16:54.294465 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 13 20:16:54.294465 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:16:54.297684 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:16:54.297684 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jan 13 20:16:54.925097 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 13 20:16:56.088222 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jan 13 20:16:56.088222 ignition[959]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(d): [finished] processing unit "containerd.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 13 
20:16:56.092061 ignition[959]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 20:16:56.092061 ignition[959]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:16:56.092061 ignition[959]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:16:56.092061 ignition[959]: INFO : files: files passed Jan 13 20:16:56.092061 ignition[959]: INFO : Ignition finished successfully Jan 13 20:16:56.094960 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:16:56.101163 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:16:56.106399 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:16:56.114089 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:16:56.122515 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:56.122515 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:56.114190 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 20:16:56.125960 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:16:56.127889 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:16:56.129076 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:16:56.133098 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:16:56.188236 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:16:56.189441 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:16:56.191106 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:16:56.191923 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:16:56.193407 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:16:56.204209 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:16:56.220041 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:16:56.229356 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:16:56.244237 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:16:56.244366 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:16:56.246223 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:56.247235 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:56.248501 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 20:16:56.249479 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:16:56.249541 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jan 13 20:16:56.250928 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:16:56.251934 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:16:56.252819 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:16:56.253753 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:16:56.254860 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:16:56.255948 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:16:56.256953 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:16:56.257990 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:16:56.259024 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:16:56.259977 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:16:56.260912 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:16:56.260995 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:16:56.262498 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:56.263120 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:56.264133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:16:56.264181 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:56.265216 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:16:56.265291 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:16:56.266866 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:16:56.266920 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:16:56.268246 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:16:56.268290 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:16:56.269175 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 13 20:16:56.269222 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 13 20:16:56.275094 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:16:56.275607 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:16:56.275676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:56.279936 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:16:56.280444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:16:56.280514 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:56.283077 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:16:56.283137 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 13 20:16:56.293670 ignition[1012]: INFO : Ignition 2.20.0 Jan 13 20:16:56.295016 ignition[1012]: INFO : Stage: umount Jan 13 20:16:56.295016 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:16:56.295016 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 13 20:16:56.299285 ignition[1012]: INFO : umount: umount passed Jan 13 20:16:56.299285 ignition[1012]: INFO : Ignition finished successfully Jan 13 20:16:56.303153 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:16:56.303789 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:16:56.304924 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:16:56.305833 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:16:56.306010 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:16:56.307216 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:16:56.307312 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:16:56.307991 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:16:56.308037 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:16:56.308823 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 20:16:56.309205 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 20:16:56.309788 systemd[1]: Stopped target network.target - Network. Jan 13 20:16:56.310600 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:16:56.310654 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:16:56.311931 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:16:56.312741 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:16:56.315909 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:56.316939 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:16:56.318445 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:16:56.319582 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:16:56.319641 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:16:56.320547 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:16:56.320581 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:16:56.321543 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:16:56.321594 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:16:56.322435 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:16:56.322479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:16:56.323437 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:16:56.323480 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:16:56.324534 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:16:56.325573 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:16:56.327943 systemd-networkd[779]: eth1: DHCPv6 lease lost Jan 13 20:16:56.333921 systemd-networkd[779]: eth0: DHCPv6 lease lost Jan 13 20:16:56.334659 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:16:56.335921 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jan 13 20:16:56.338594 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:16:56.339521 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:16:56.341221 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:16:56.341280 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:56.348968 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:16:56.349467 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:16:56.349530 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:16:56.351870 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:16:56.351932 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:56.353420 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:16:56.353459 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:56.354604 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:16:56.354649 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:56.356200 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:56.370714 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:16:56.370894 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:16:56.382366 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:16:56.383725 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:56.385501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:16:56.385565 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:56.387286 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:16:56.387326 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:56.388300 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:16:56.388352 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:16:56.389949 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:16:56.390001 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:16:56.391487 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:16:56.391536 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:16:56.399167 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:16:56.402099 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:16:56.402244 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:56.405094 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:56.405164 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:56.406297 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:16:56.406414 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:16:56.407830 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:16:56.415124 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 13 20:16:56.426012 systemd[1]: Switching root. Jan 13 20:16:56.459385 systemd-journald[238]: Journal stopped Jan 13 20:16:57.474403 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 13 20:16:57.474511 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:16:57.474533 kernel: SELinux: policy capability open_perms=1 Jan 13 20:16:57.474543 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:16:57.474553 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:16:57.474562 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:16:57.474576 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:16:57.474585 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:16:57.474598 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:16:57.474608 kernel: audit: type=1403 audit(1736799416.715:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:16:57.474620 systemd[1]: Successfully loaded SELinux policy in 36.531ms. Jan 13 20:16:57.474648 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.002ms. Jan 13 20:16:57.474660 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:16:57.474670 systemd[1]: Detected virtualization kvm. Jan 13 20:16:57.474681 systemd[1]: Detected architecture arm64. Jan 13 20:16:57.474691 systemd[1]: Detected first boot. Jan 13 20:16:57.474701 systemd[1]: Hostname set to <ci-4152-2-0-9-7c8f4a1e31>. Jan 13 20:16:57.474711 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:16:57.474723 zram_generator::config[1075]: No configuration found. Jan 13 20:16:57.474734 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:16:57.474755 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:16:57.474766 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 13 20:16:57.474778 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:16:57.474789 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:16:57.474799 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:16:57.474809 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:16:57.474821 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:16:57.474832 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:16:57.474947 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:16:57.474962 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:16:57.474972 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:16:57.474983 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:16:57.474994 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:16:57.475004 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:16:57.475015 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:16:57.475028 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:16:57.475039 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:16:57.475049 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:16:57.475060 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:16:57.475070 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:16:57.475081 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:16:57.475093 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:16:57.475103 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:16:57.475115 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:16:57.475125 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:16:57.475135 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:16:57.475146 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:16:57.475159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:16:57.475173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:16:57.475185 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:16:57.475196 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:16:57.475208 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:16:57.475219 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:16:57.475230 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:16:57.475240 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:16:57.475250 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:16:57.475264 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:16:57.475276 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:16:57.475288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:57.475298 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:16:57.475309 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:16:57.475319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:57.475331 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:57.475341 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:57.475353 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:16:57.475365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:57.475376 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 13 20:16:57.475391 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 13 20:16:57.475402 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 13 20:16:57.475413 kernel: fuse: init (API version 7.39) Jan 13 20:16:57.475423 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:16:57.475433 kernel: ACPI: bus type drm_connector registered Jan 13 20:16:57.475442 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:16:57.475454 kernel: loop: module loaded Jan 13 20:16:57.475465 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:16:57.475475 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:16:57.475485 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:16:57.475524 systemd-journald[1163]: Collecting audit messages is disabled. Jan 13 20:16:57.475545 systemd-journald[1163]: Journal started Jan 13 20:16:57.475569 systemd-journald[1163]: Runtime Journal (/run/log/journal/a1c511ed4e1348c9a472d41e2a6ba61a) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:16:57.479239 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:16:57.487650 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:16:57.489526 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:16:57.493074 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:16:57.494071 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:16:57.495058 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:16:57.496069 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:16:57.497194 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:16:57.498386 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:16:57.499396 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:16:57.499556 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:16:57.501295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:57.501608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:57.503247 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:57.503506 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:57.504629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:57.504928 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:57.506416 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:16:57.506714 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:16:57.507868 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:57.508260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:57.509476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:16:57.510532 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 13 20:16:57.512192 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:16:57.526446 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:16:57.534149 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:16:57.542065 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:16:57.542731 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:16:57.553088 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:16:57.558215 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:16:57.562075 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:57.579223 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:16:57.581366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:57.589476 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:16:57.605125 systemd-journald[1163]: Time spent on flushing to /var/log/journal/a1c511ed4e1348c9a472d41e2a6ba61a is 29.347ms for 1112 entries. Jan 13 20:16:57.605125 systemd-journald[1163]: System Journal (/var/log/journal/a1c511ed4e1348c9a472d41e2a6ba61a) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:16:57.653309 systemd-journald[1163]: Received client request to flush runtime journal. Jan 13 20:16:57.608145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:16:57.620252 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:16:57.621418 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:16:57.624434 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:16:57.625499 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:16:57.631297 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:16:57.646537 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:16:57.656397 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:16:57.669312 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:16:57.681581 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:16:57.682452 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Jan 13 20:16:57.682464 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Jan 13 20:16:57.688061 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:16:57.700062 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:16:57.731887 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:16:57.737214 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:16:57.752475 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. 
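A quick back-of-the-envelope from the journald flush line above (29.347 ms for 1112 entries):

    entries, flush_ms = 1112, 29.347                        # values from the log
    print(f"{flush_ms * 1000 / entries:.1f} us/entry")      # ~26.4 us per entry
    print(f"{entries / (flush_ms / 1000):,.0f} entries/s")  # ~37,892 entries/s

so moving the runtime journal to persistent storage costs on the order of tens of microseconds per entry here.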
Jan 13 20:16:57.752867 systemd-tmpfiles[1230]: ACLs are not supported, ignoring. Jan 13 20:16:57.759861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:16:58.154185 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:16:58.163192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:16:58.186951 systemd-udevd[1236]: Using default interface naming scheme 'v255'. Jan 13 20:16:58.208596 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:16:58.217554 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:16:58.243096 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:16:58.291299 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 13 20:16:58.296184 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:16:58.370202 systemd-networkd[1242]: lo: Link UP Jan 13 20:16:58.370531 systemd-networkd[1242]: lo: Gained carrier Jan 13 20:16:58.372429 systemd-networkd[1242]: Enumeration completed Jan 13 20:16:58.373240 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:16:58.376037 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.376421 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:58.382258 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.382389 systemd-networkd[1242]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:16:58.383183 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:16:58.385584 systemd-networkd[1242]: eth0: Link UP Jan 13 20:16:58.387131 systemd-networkd[1242]: eth0: Gained carrier Jan 13 20:16:58.387201 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.396353 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.396393 systemd-networkd[1242]: eth1: Link UP Jan 13 20:16:58.396396 systemd-networkd[1242]: eth1: Gained carrier Jan 13 20:16:58.396405 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.419943 systemd-networkd[1242]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:16:58.425058 systemd-networkd[1242]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:16:58.432118 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1249) Jan 13 20:16:58.461968 systemd-networkd[1242]: eth0: DHCPv4 address 138.199.153.199/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:16:58.471004 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:16:58.487434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
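The DHCPv4 leases above use the point-to-point pattern common on cloud networks: each interface receives a /32 host address whose gateway (172.31.1.1 for eth0, 10.0.0.1 for eth1) lies outside its own prefix, so the gateway must be reached via an on-link route rather than a shared subnet. A quick check with Python's ipaddress module, using the addresses from the log:

    import ipaddress

    eth0 = ipaddress.ip_interface("138.199.153.199/32")  # from the DHCPv4 line
    gateway = ipaddress.ip_address("172.31.1.1")
    print(gateway in eth0.network)  # False: a /32 contains only the address itself

Because the gateway is outside every locally configured prefix, the route to it has to be marked on-link for the default route to work, which is how a /32 lease still yields full connectivity.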
Jan 13 20:16:58.499303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:58.502189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:58.515050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:58.515625 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:16:58.515669 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:16:58.516103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:58.516275 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:58.530581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:58.530818 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:58.531801 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:58.534329 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:58.542056 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:58.542945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:58.567541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:16:58.575894 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 13 20:16:58.575992 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:16:58.576010 kernel: [drm] features: -context_init Jan 13 20:16:58.575240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:58.576914 kernel: [drm] number of scanouts: 1 Jan 13 20:16:58.576980 kernel: [drm] number of cap sets: 0 Jan 13 20:16:58.584028 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 13 20:16:58.591509 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:16:58.596888 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:16:58.602807 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:16:58.603103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:58.609019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:16:58.671621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:16:58.720456 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:16:58.728106 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:16:58.749086 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:16:58.776377 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:16:58.778567 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:16:58.785118 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:16:58.791477 lvm[1311]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 13 20:16:58.815306 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:16:58.817473 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:16:58.819149 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:16:58.819325 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:16:58.820312 systemd[1]: Reached target machines.target - Containers. Jan 13 20:16:58.822118 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:16:58.828238 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:16:58.838079 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:16:58.839008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:58.841250 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:16:58.852148 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:16:58.859076 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:16:58.860827 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:16:58.885770 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:16:58.890078 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:16:58.897472 kernel: loop0: detected capacity change from 0 to 113536 Jan 13 20:16:58.895515 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:16:58.923922 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:16:58.941923 kernel: loop1: detected capacity change from 0 to 194512 Jan 13 20:16:58.984937 kernel: loop2: detected capacity change from 0 to 8 Jan 13 20:16:59.003511 kernel: loop3: detected capacity change from 0 to 116808 Jan 13 20:16:59.048216 kernel: loop4: detected capacity change from 0 to 113536 Jan 13 20:16:59.065987 kernel: loop5: detected capacity change from 0 to 194512 Jan 13 20:16:59.081054 kernel: loop6: detected capacity change from 0 to 8 Jan 13 20:16:59.082885 kernel: loop7: detected capacity change from 0 to 116808 Jan 13 20:16:59.098201 (sd-merge)[1333]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 13 20:16:59.099638 (sd-merge)[1333]: Merged extensions into '/usr'. Jan 13 20:16:59.104565 systemd[1]: Reloading requested from client PID 1319 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:16:59.104738 systemd[1]: Reloading... Jan 13 20:16:59.195924 zram_generator::config[1361]: No configuration found. Jan 13 20:16:59.290570 ldconfig[1315]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:16:59.326530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:59.382318 systemd[1]: Reloading finished in 277 ms. 
Jan 13 20:16:59.397749 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:16:59.401184 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:16:59.413243 systemd[1]: Starting ensure-sysext.service... Jan 13 20:16:59.421071 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:16:59.425289 systemd[1]: Reloading requested from client PID 1405 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:16:59.425315 systemd[1]: Reloading... Jan 13 20:16:59.451315 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:16:59.452009 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:16:59.452676 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:16:59.454424 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Jan 13 20:16:59.454870 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Jan 13 20:16:59.458323 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:59.458540 systemd-tmpfiles[1406]: Skipping /boot Jan 13 20:16:59.470192 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:16:59.470408 systemd-tmpfiles[1406]: Skipping /boot Jan 13 20:16:59.507922 zram_generator::config[1434]: No configuration found. Jan 13 20:16:59.633495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:16:59.691663 systemd[1]: Reloading finished in 266 ms. Jan 13 20:16:59.702652 systemd-networkd[1242]: eth0: Gained IPv6LL Jan 13 20:16:59.703716 systemd-networkd[1242]: eth1: Gained IPv6LL Jan 13 20:16:59.707337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:16:59.720150 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:59.726074 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:16:59.734435 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:16:59.746215 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:16:59.750259 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:16:59.752713 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:16:59.777792 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:16:59.792413 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:59.800502 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:59.812123 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:59.822135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:59.828569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 20:16:59.830615 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:16:59.833761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:59.833953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:59.846624 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:16:59.850032 augenrules[1517]: No rules Jan 13 20:16:59.850957 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:59.851162 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:59.854436 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:59.854704 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:59.861957 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:59.862160 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:59.865649 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:59.868035 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:16:59.872525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:59.881410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:59.890265 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:59.894224 systemd-resolved[1488]: Positive Trust Anchors: Jan 13 20:16:59.894657 systemd-resolved[1488]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:16:59.894784 systemd-resolved[1488]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:16:59.898263 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:59.899141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:59.902083 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:16:59.906297 systemd-resolved[1488]: Using system hostname 'ci-4152-2-0-9-7c8f4a1e31'. Jan 13 20:16:59.908589 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:16:59.912485 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:16:59.915884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:59.916078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:59.917312 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:59.917471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:59.919166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 13 20:16:59.922567 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:59.930653 systemd[1]: Reached target network.target - Network. Jan 13 20:16:59.931642 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:16:59.932473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:16:59.938206 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:16:59.938900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:16:59.941183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:16:59.947981 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:16:59.952139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:16:59.967222 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:16:59.967981 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:16:59.968115 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:16:59.969756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:16:59.970821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:16:59.971622 augenrules[1542]: /sbin/augenrules: No change Jan 13 20:16:59.981594 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:16:59.981854 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:16:59.986753 systemd[1]: Finished ensure-sysext.service. Jan 13 20:16:59.988458 augenrules[1570]: No rules Jan 13 20:16:59.988625 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:16:59.988855 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:16:59.991112 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:16:59.994121 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:16:59.995283 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:16:59.995518 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:16:59.998739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:16:59.998820 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:17:00.011167 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:17:00.065432 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:17:00.067523 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:17:00.069133 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:17:00.070226 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:17:00.070980 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 13 20:17:00.071674 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:17:00.071754 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:17:00.072357 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:17:00.073247 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:17:00.074006 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:17:00.074658 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:17:00.077495 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:17:00.080743 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:17:00.083174 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:17:00.086320 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:17:00.087144 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:17:00.087670 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:17:00.088473 systemd[1]: System is tainted: cgroupsv1 Jan 13 20:17:00.088524 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:17:00.088557 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:17:00.092003 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:17:00.096047 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:17:00.099493 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:17:00.117058 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:17:00.122251 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:17:00.123470 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:17:00.127262 jq[1589]: false Jan 13 20:17:00.135971 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:00.142429 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:17:00.148450 dbus-daemon[1588]: [system] SELinux support is enabled Jan 13 20:17:00.149186 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:17:00.161108 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:17:00.169180 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Jan 13 20:17:00.171780 extend-filesystems[1590]: Found loop4 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found loop5 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found loop6 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found loop7 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found sda Jan 13 20:17:00.171780 extend-filesystems[1590]: Found sda1 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found sda2 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found sda3 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found usr Jan 13 20:17:00.171780 extend-filesystems[1590]: Found sda4 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found sda6 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found sda7 Jan 13 20:17:00.171780 extend-filesystems[1590]: Found sda9 Jan 13 20:17:00.171780 extend-filesystems[1590]: Checking size of /dev/sda9 Jan 13 20:17:00.192007 coreos-metadata[1586]: Jan 13 20:17:00.177 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 13 20:17:00.192007 coreos-metadata[1586]: Jan 13 20:17:00.183 INFO Fetch successful Jan 13 20:17:00.192007 coreos-metadata[1586]: Jan 13 20:17:00.183 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 13 20:17:00.192007 coreos-metadata[1586]: Jan 13 20:17:00.184 INFO Fetch successful Jan 13 20:17:00.179913 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:17:00.662138 systemd-resolved[1488]: Clock change detected. Flushing caches. Jan 13 20:17:00.662375 systemd-timesyncd[1581]: Contacted time server 167.235.69.67:123 (0.flatcar.pool.ntp.org). Jan 13 20:17:00.662431 systemd-timesyncd[1581]: Initial clock synchronization to Mon 2025-01-13 20:17:00.662088 UTC. Jan 13 20:17:00.663152 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:17:00.686551 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:17:00.687958 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:17:00.698148 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:17:00.704440 extend-filesystems[1590]: Resized partition /dev/sda9 Jan 13 20:17:00.711553 extend-filesystems[1628]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:17:00.723067 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 13 20:17:00.713709 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:17:00.718648 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:17:00.740331 jq[1626]: true Jan 13 20:17:00.743677 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:17:00.748847 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:17:00.760450 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:17:00.760716 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:17:00.761687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:17:00.764629 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:17:00.764861 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 13 20:17:00.796850 (ntainerd)[1640]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:17:00.811902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:17:00.821475 jq[1639]: true Jan 13 20:17:00.813410 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:17:00.823830 update_engine[1621]: I20250113 20:17:00.821225 1621 main.cc:92] Flatcar Update Engine starting Jan 13 20:17:00.815261 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:17:00.815282 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:17:00.832879 update_engine[1621]: I20250113 20:17:00.832079 1621 update_check_scheduler.cc:74] Next update check in 10m19s Jan 13 20:17:00.831689 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:17:00.855672 systemd-logind[1612]: New seat seat0. Jan 13 20:17:00.866418 tar[1638]: linux-arm64/helm Jan 13 20:17:00.859703 systemd-logind[1612]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:17:00.859719 systemd-logind[1612]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 13 20:17:00.865109 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:17:00.867463 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:17:00.871037 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 20:17:00.898340 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1248) Jan 13 20:17:00.903842 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 13 20:17:00.918422 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 20:17:00.923952 extend-filesystems[1628]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 20:17:00.923952 extend-filesystems[1628]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 13 20:17:00.923952 extend-filesystems[1628]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 13 20:17:00.921697 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:17:00.931934 extend-filesystems[1590]: Resized filesystem in /dev/sda9 Jan 13 20:17:00.931934 extend-filesystems[1590]: Found sr0 Jan 13 20:17:00.927912 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:17:00.928165 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:17:00.970043 bash[1684]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:17:00.971994 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:17:00.987387 systemd[1]: Starting sshkeys.service... Jan 13 20:17:01.011694 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 20:17:01.020544 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 13 20:17:01.136690 coreos-metadata[1688]: Jan 13 20:17:01.136 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 13 20:17:01.142689 coreos-metadata[1688]: Jan 13 20:17:01.142 INFO Fetch successful Jan 13 20:17:01.151098 unknown[1688]: wrote ssh authorized keys file for user: core Jan 13 20:17:01.198961 update-ssh-keys[1700]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:17:01.200584 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 13 20:17:01.218552 systemd[1]: Finished sshkeys.service. Jan 13 20:17:01.249049 containerd[1640]: time="2025-01-13T20:17:01.248960243Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:17:01.281817 containerd[1640]: time="2025-01-13T20:17:01.281771603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:17:01.283627 containerd[1640]: time="2025-01-13T20:17:01.283579723Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:17:01.283781 containerd[1640]: time="2025-01-13T20:17:01.283764883Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:17:01.283838 containerd[1640]: time="2025-01-13T20:17:01.283826523Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:17:01.284061 containerd[1640]: time="2025-01-13T20:17:01.284042523Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:17:01.284195 containerd[1640]: time="2025-01-13T20:17:01.284167403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:17:01.284407 containerd[1640]: time="2025-01-13T20:17:01.284384323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:17:01.284487 containerd[1640]: time="2025-01-13T20:17:01.284472723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:17:01.284941 containerd[1640]: time="2025-01-13T20:17:01.284917803Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:17:01.285130 containerd[1640]: time="2025-01-13T20:17:01.285110803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 20:17:01.285372 containerd[1640]: time="2025-01-13T20:17:01.285350203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:17:01.285556 containerd[1640]: time="2025-01-13T20:17:01.285479243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:17:01.285949 containerd[1640]: time="2025-01-13T20:17:01.285927403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:17:01.286488 containerd[1640]: time="2025-01-13T20:17:01.286463243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:17:01.286791 containerd[1640]: time="2025-01-13T20:17:01.286768643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:17:01.286931 containerd[1640]: time="2025-01-13T20:17:01.286912043Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:17:01.287094 containerd[1640]: time="2025-01-13T20:17:01.287076603Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:17:01.287285 containerd[1640]: time="2025-01-13T20:17:01.287266803Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:17:01.294453 containerd[1640]: time="2025-01-13T20:17:01.294406523Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:17:01.295078 containerd[1640]: time="2025-01-13T20:17:01.294651403Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:17:01.295078 containerd[1640]: time="2025-01-13T20:17:01.294676163Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:17:01.295078 containerd[1640]: time="2025-01-13T20:17:01.294693483Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:17:01.295078 containerd[1640]: time="2025-01-13T20:17:01.294707603Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:17:01.295078 containerd[1640]: time="2025-01-13T20:17:01.294878803Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:17:01.296027 containerd[1640]: time="2025-01-13T20:17:01.295988643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:17:01.296402 containerd[1640]: time="2025-01-13T20:17:01.296380603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:17:01.296493 containerd[1640]: time="2025-01-13T20:17:01.296478963Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:17:01.296598 containerd[1640]: time="2025-01-13T20:17:01.296582483Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 20:17:01.296820 containerd[1640]: time="2025-01-13T20:17:01.296654043Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:17:01.296820 containerd[1640]: time="2025-01-13T20:17:01.296674843Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:17:01.296820 containerd[1640]: time="2025-01-13T20:17:01.296687803Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:17:01.296820 containerd[1640]: time="2025-01-13T20:17:01.296703083Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:17:01.296820 containerd[1640]: time="2025-01-13T20:17:01.296728923Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:17:01.296820 containerd[1640]: time="2025-01-13T20:17:01.296744523Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:17:01.296820 containerd[1640]: time="2025-01-13T20:17:01.296757403Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:17:01.296975 containerd[1640]: time="2025-01-13T20:17:01.296769603Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:17:01.297039 containerd[1640]: time="2025-01-13T20:17:01.297026283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.297095 containerd[1640]: time="2025-01-13T20:17:01.297083603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.297155 containerd[1640]: time="2025-01-13T20:17:01.297142403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297229163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297251723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297267123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297278163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297292363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297326563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297344283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297357763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297369443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297384043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297398643Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297422883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297435043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299328 containerd[1640]: time="2025-01-13T20:17:01.297445763Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:17:01.299799 containerd[1640]: time="2025-01-13T20:17:01.297619563Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:17:01.299799 containerd[1640]: time="2025-01-13T20:17:01.297639243Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:17:01.299799 containerd[1640]: time="2025-01-13T20:17:01.297695203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:17:01.299799 containerd[1640]: time="2025-01-13T20:17:01.297712723Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:17:01.299799 containerd[1640]: time="2025-01-13T20:17:01.297722563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:17:01.299799 containerd[1640]: time="2025-01-13T20:17:01.297734643Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:17:01.299799 containerd[1640]: time="2025-01-13T20:17:01.297745483Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:17:01.299799 containerd[1640]: time="2025-01-13T20:17:01.297755243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:17:01.299997 containerd[1640]: time="2025-01-13T20:17:01.298101363Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:17:01.299997 containerd[1640]: time="2025-01-13T20:17:01.298147843Z" level=info msg="Connect containerd service" Jan 13 20:17:01.299997 containerd[1640]: time="2025-01-13T20:17:01.298195683Z" level=info msg="using legacy CRI server" Jan 13 20:17:01.299997 containerd[1640]: time="2025-01-13T20:17:01.298204003Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:17:01.299997 containerd[1640]: time="2025-01-13T20:17:01.298452683Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:17:01.299997 containerd[1640]: time="2025-01-13T20:17:01.299086363Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:17:01.300913 containerd[1640]: time="2025-01-13T20:17:01.300872323Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:17:01.301093 containerd[1640]: time="2025-01-13T20:17:01.301055563Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:17:01.301433 containerd[1640]: time="2025-01-13T20:17:01.301401963Z" level=info msg="Start subscribing containerd event" Jan 13 20:17:01.301515 containerd[1640]: time="2025-01-13T20:17:01.301502003Z" level=info msg="Start recovering state" Jan 13 20:17:01.301620 containerd[1640]: time="2025-01-13T20:17:01.301608203Z" level=info msg="Start event monitor" Jan 13 20:17:01.301690 containerd[1640]: time="2025-01-13T20:17:01.301677603Z" level=info msg="Start snapshots syncer" Jan 13 20:17:01.301783 containerd[1640]: time="2025-01-13T20:17:01.301770363Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:17:01.301891 containerd[1640]: time="2025-01-13T20:17:01.301874763Z" level=info msg="Start streaming server" Jan 13 20:17:01.302128 containerd[1640]: time="2025-01-13T20:17:01.302094723Z" level=info msg="containerd successfully booted in 0.054588s" Jan 13 20:17:01.302245 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:17:01.320385 locksmithd[1661]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:17:01.703954 sshd_keygen[1630]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:17:01.727390 tar[1638]: linux-arm64/LICENSE Jan 13 20:17:01.727506 tar[1638]: linux-arm64/README.md Jan 13 20:17:01.738749 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:17:01.750687 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:17:01.751927 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:17:01.768752 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:17:01.769638 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:17:01.783797 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:17:01.800779 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:17:01.811018 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:17:01.817026 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:17:01.820907 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:17:01.876615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:01.878606 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 20:17:01.880063 systemd[1]: Startup finished in 7.825s (kernel) + 4.731s (userspace) = 12.556s. Jan 13 20:17:01.880875 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:02.551041 kubelet[1744]: E0113 20:17:02.550959 1744 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:02.554791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:02.554979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:17:12.806007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:17:12.813676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:12.921549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:12.936128 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:12.990811 kubelet[1769]: E0113 20:17:12.990735 1769 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:12.994454 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:12.994791 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:23.099143 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 13 20:17:23.109794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:23.223490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:23.238989 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:23.294934 kubelet[1790]: E0113 20:17:23.294867 1790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:23.297443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:23.297646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:33.349649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 20:17:33.362564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:33.477537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:33.479125 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:33.534610 kubelet[1811]: E0113 20:17:33.534528 1811 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:33.537921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:33.538175 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:43.599442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 13 20:17:43.606797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:43.730561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:17:43.732875 (kubelet)[1832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:43.785522 kubelet[1832]: E0113 20:17:43.785425 1832 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:43.788837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:43.789049 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:17:46.369572 update_engine[1621]: I20250113 20:17:46.369400 1621 update_attempter.cc:509] Updating boot flags... Jan 13 20:17:46.423327 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1850) Jan 13 20:17:46.472401 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1854) Jan 13 20:17:53.848966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 13 20:17:53.856660 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:17:53.979583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:17:53.989141 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:17:54.044373 kubelet[1871]: E0113 20:17:54.044282 1871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:17:54.046926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:17:54.047065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:04.099126 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 13 20:18:04.104525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:04.223887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:04.235485 (kubelet)[1891]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:04.296804 kubelet[1891]: E0113 20:18:04.296736 1891 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:04.300225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:04.300709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:14.349054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 13 20:18:14.356656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:14.474564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:18:14.484593 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:14.543644 kubelet[1912]: E0113 20:18:14.543570 1912 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:14.547461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:14.547656 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:24.600694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 13 20:18:24.610761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:24.739618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:24.740720 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:24.797559 kubelet[1933]: E0113 20:18:24.797407 1933 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:24.800613 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:24.800932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:34.849691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 13 20:18:34.860546 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:34.984452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:34.996103 (kubelet)[1955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:35.053944 kubelet[1955]: E0113 20:18:35.053864 1955 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:35.056451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:35.056626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:45.099477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 13 20:18:45.106533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:45.227017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:18:45.232664 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:45.282446 kubelet[1976]: E0113 20:18:45.282382 1976 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:45.285277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:45.285562 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:45.649939 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:18:45.660267 systemd[1]: Started sshd@0-138.199.153.199:22-147.75.109.163:39298.service - OpenSSH per-connection server daemon (147.75.109.163:39298). Jan 13 20:18:46.670658 sshd[1986]: Accepted publickey for core from 147.75.109.163 port 39298 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:46.673415 sshd-session[1986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:46.686446 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:18:46.694791 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:18:46.699784 systemd-logind[1612]: New session 1 of user core. Jan 13 20:18:46.708026 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:18:46.715738 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:18:46.720981 (systemd)[1992]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:18:46.819883 systemd[1992]: Queued start job for default target default.target. Jan 13 20:18:46.820959 systemd[1992]: Created slice app.slice - User Application Slice. Jan 13 20:18:46.821099 systemd[1992]: Reached target paths.target - Paths. Jan 13 20:18:46.821179 systemd[1992]: Reached target timers.target - Timers. Jan 13 20:18:46.830081 systemd[1992]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:18:46.838730 systemd[1992]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:18:46.839102 systemd[1992]: Reached target sockets.target - Sockets. Jan 13 20:18:46.839141 systemd[1992]: Reached target basic.target - Basic System. Jan 13 20:18:46.839328 systemd[1992]: Reached target default.target - Main User Target. Jan 13 20:18:46.839425 systemd[1992]: Startup finished in 111ms. Jan 13 20:18:46.840125 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:18:46.849496 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:18:47.538607 systemd[1]: Started sshd@1-138.199.153.199:22-147.75.109.163:34266.service - OpenSSH per-connection server daemon (147.75.109.163:34266). Jan 13 20:18:48.529920 sshd[2004]: Accepted publickey for core from 147.75.109.163 port 34266 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:48.531715 sshd-session[2004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:48.538506 systemd-logind[1612]: New session 2 of user core. Jan 13 20:18:48.544679 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 13 20:18:49.206888 sshd[2007]: Connection closed by 147.75.109.163 port 34266 Jan 13 20:18:49.207550 sshd-session[2004]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:49.211697 systemd[1]: sshd@1-138.199.153.199:22-147.75.109.163:34266.service: Deactivated successfully. Jan 13 20:18:49.215669 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:18:49.216686 systemd-logind[1612]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:18:49.218006 systemd-logind[1612]: Removed session 2. Jan 13 20:18:49.378019 systemd[1]: Started sshd@2-138.199.153.199:22-147.75.109.163:34270.service - OpenSSH per-connection server daemon (147.75.109.163:34270). Jan 13 20:18:50.374627 sshd[2012]: Accepted publickey for core from 147.75.109.163 port 34270 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:50.376881 sshd-session[2012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:50.382672 systemd-logind[1612]: New session 3 of user core. Jan 13 20:18:50.392837 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:18:51.061395 sshd[2015]: Connection closed by 147.75.109.163 port 34270 Jan 13 20:18:51.062369 sshd-session[2012]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:51.067346 systemd[1]: sshd@2-138.199.153.199:22-147.75.109.163:34270.service: Deactivated successfully. Jan 13 20:18:51.071717 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:18:51.072548 systemd-logind[1612]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:18:51.073742 systemd-logind[1612]: Removed session 3. Jan 13 20:18:51.233117 systemd[1]: Started sshd@3-138.199.153.199:22-147.75.109.163:34286.service - OpenSSH per-connection server daemon (147.75.109.163:34286). Jan 13 20:18:52.233148 sshd[2020]: Accepted publickey for core from 147.75.109.163 port 34286 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:52.236021 sshd-session[2020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:52.241585 systemd-logind[1612]: New session 4 of user core. Jan 13 20:18:52.253016 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:18:52.920392 sshd[2023]: Connection closed by 147.75.109.163 port 34286 Jan 13 20:18:52.921262 sshd-session[2020]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:52.928681 systemd[1]: sshd@3-138.199.153.199:22-147.75.109.163:34286.service: Deactivated successfully. Jan 13 20:18:52.932023 systemd-logind[1612]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:18:52.932333 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:18:52.933638 systemd-logind[1612]: Removed session 4. Jan 13 20:18:53.088704 systemd[1]: Started sshd@4-138.199.153.199:22-147.75.109.163:34292.service - OpenSSH per-connection server daemon (147.75.109.163:34292). Jan 13 20:18:54.083749 sshd[2028]: Accepted publickey for core from 147.75.109.163 port 34292 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:54.086154 sshd-session[2028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:54.092290 systemd-logind[1612]: New session 5 of user core. Jan 13 20:18:54.097852 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 20:18:54.621437 sudo[2032]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:18:54.621720 sudo[2032]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:18:54.639191 sudo[2032]: pam_unix(sudo:session): session closed for user root Jan 13 20:18:54.800086 sshd[2031]: Connection closed by 147.75.109.163 port 34292 Jan 13 20:18:54.801264 sshd-session[2028]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:54.807399 systemd[1]: sshd@4-138.199.153.199:22-147.75.109.163:34292.service: Deactivated successfully. Jan 13 20:18:54.810991 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:18:54.812205 systemd-logind[1612]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:18:54.813895 systemd-logind[1612]: Removed session 5. Jan 13 20:18:54.962768 systemd[1]: Started sshd@5-138.199.153.199:22-147.75.109.163:34296.service - OpenSSH per-connection server daemon (147.75.109.163:34296). Jan 13 20:18:55.348918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 13 20:18:55.356825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:18:55.487504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:18:55.499045 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:18:55.553606 kubelet[2050]: E0113 20:18:55.553534 2050 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:18:55.555953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:18:55.556103 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:18:55.952215 sshd[2037]: Accepted publickey for core from 147.75.109.163 port 34296 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:55.954918 sshd-session[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:55.960859 systemd-logind[1612]: New session 6 of user core. Jan 13 20:18:55.971889 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:18:56.468189 sudo[2063]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:18:56.469043 sudo[2063]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:18:56.473139 sudo[2063]: pam_unix(sudo:session): session closed for user root Jan 13 20:18:56.479078 sudo[2062]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:18:56.479589 sudo[2062]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:18:56.498366 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:18:56.527807 augenrules[2085]: No rules Jan 13 20:18:56.528745 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:18:56.528994 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 13 20:18:56.531850 sudo[2062]: pam_unix(sudo:session): session closed for user root Jan 13 20:18:56.690415 sshd[2061]: Connection closed by 147.75.109.163 port 34296 Jan 13 20:18:56.691370 sshd-session[2037]: pam_unix(sshd:session): session closed for user core Jan 13 20:18:56.697577 systemd[1]: sshd@5-138.199.153.199:22-147.75.109.163:34296.service: Deactivated successfully. Jan 13 20:18:56.702054 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:18:56.702481 systemd-logind[1612]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:18:56.703755 systemd-logind[1612]: Removed session 6. Jan 13 20:18:56.859567 systemd[1]: Started sshd@6-138.199.153.199:22-147.75.109.163:34312.service - OpenSSH per-connection server daemon (147.75.109.163:34312). Jan 13 20:18:57.850411 sshd[2094]: Accepted publickey for core from 147.75.109.163 port 34312 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:18:57.852055 sshd-session[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:18:57.857377 systemd-logind[1612]: New session 7 of user core. Jan 13 20:18:57.863886 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:18:58.377176 sudo[2098]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:18:58.377885 sudo[2098]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:18:58.684746 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:18:58.685007 (dockerd)[2117]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:18:58.927077 dockerd[2117]: time="2025-01-13T20:18:58.926998653Z" level=info msg="Starting up" Jan 13 20:18:59.004740 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport537999263-merged.mount: Deactivated successfully. Jan 13 20:18:59.028046 dockerd[2117]: time="2025-01-13T20:18:59.027759796Z" level=info msg="Loading containers: start." Jan 13 20:18:59.187363 kernel: Initializing XFRM netlink socket Jan 13 20:18:59.275919 systemd-networkd[1242]: docker0: Link UP Jan 13 20:18:59.318025 dockerd[2117]: time="2025-01-13T20:18:59.317908779Z" level=info msg="Loading containers: done." Jan 13 20:18:59.334783 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3248047289-merged.mount: Deactivated successfully. Jan 13 20:18:59.336190 dockerd[2117]: time="2025-01-13T20:18:59.335568622Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:18:59.336190 dockerd[2117]: time="2025-01-13T20:18:59.335721863Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:18:59.336190 dockerd[2117]: time="2025-01-13T20:18:59.335881584Z" level=info msg="Daemon has completed initialization" Jan 13 20:18:59.383831 dockerd[2117]: time="2025-01-13T20:18:59.383652623Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:18:59.384652 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 13 20:19:00.798961 containerd[1640]: time="2025-01-13T20:19:00.798455307Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:19:01.482739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822740347.mount: Deactivated successfully. Jan 13 20:19:02.502476 containerd[1640]: time="2025-01-13T20:19:02.502406273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:02.503670 containerd[1640]: time="2025-01-13T20:19:02.503378361Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201342" Jan 13 20:19:02.505106 containerd[1640]: time="2025-01-13T20:19:02.504989135Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:02.509199 containerd[1640]: time="2025-01-13T20:19:02.509114090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:02.511292 containerd[1640]: time="2025-01-13T20:19:02.511220388Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.712646919s" Jan 13 20:19:02.511687 containerd[1640]: time="2025-01-13T20:19:02.511476950Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 20:19:02.541431 containerd[1640]: time="2025-01-13T20:19:02.541360924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:19:03.785368 containerd[1640]: time="2025-01-13T20:19:03.784103496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:03.786997 containerd[1640]: time="2025-01-13T20:19:03.786941439Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381317" Jan 13 20:19:03.788421 containerd[1640]: time="2025-01-13T20:19:03.788382651Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:03.792031 containerd[1640]: time="2025-01-13T20:19:03.791991041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:03.794350 containerd[1640]: time="2025-01-13T20:19:03.794291500Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.252868816s"
Jan 13 20:19:03.794494 containerd[1640]: time="2025-01-13T20:19:03.794478782Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 20:19:03.819477 containerd[1640]: time="2025-01-13T20:19:03.819429589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:19:04.817432 containerd[1640]: time="2025-01-13T20:19:04.817227584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.819636 containerd[1640]: time="2025-01-13T20:19:04.819560563Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765660" Jan 13 20:19:04.823320 containerd[1640]: time="2025-01-13T20:19:04.821879742Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.829606 containerd[1640]: time="2025-01-13T20:19:04.829552964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:04.831092 containerd[1640]: time="2025-01-13T20:19:04.831044896Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.011294584s" Jan 13 20:19:04.831238 containerd[1640]: time="2025-01-13T20:19:04.831221137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 20:19:04.855844 containerd[1640]: time="2025-01-13T20:19:04.855810457Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:19:05.599802 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 13 20:19:05.610923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:05.769604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:05.770096 (kubelet)[2400]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:19:05.820732 kubelet[2400]: E0113 20:19:05.820634 2400 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:19:05.824141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:19:05.826462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:19:05.935907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213188742.mount: Deactivated successfully.
Jan 13 20:19:06.235432 containerd[1640]: time="2025-01-13T20:19:06.235191978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.236774 containerd[1640]: time="2025-01-13T20:19:06.236679029Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25274003" Jan 13 20:19:06.238059 containerd[1640]: time="2025-01-13T20:19:06.238005119Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.240567 containerd[1640]: time="2025-01-13T20:19:06.240495498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:06.241824 containerd[1640]: time="2025-01-13T20:19:06.241546507Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.385423728s" Jan 13 20:19:06.241824 containerd[1640]: time="2025-01-13T20:19:06.241590227Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:19:06.269422 containerd[1640]: time="2025-01-13T20:19:06.269386601Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:19:06.766047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379780685.mount: Deactivated successfully. 
Jan 13 20:19:07.393786 containerd[1640]: time="2025-01-13T20:19:07.392593827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.395591 containerd[1640]: time="2025-01-13T20:19:07.395536809Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 13 20:19:07.397157 containerd[1640]: time="2025-01-13T20:19:07.397080261Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.401351 containerd[1640]: time="2025-01-13T20:19:07.401035411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.403055 containerd[1640]: time="2025-01-13T20:19:07.402828264Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.133220581s" Jan 13 20:19:07.403055 containerd[1640]: time="2025-01-13T20:19:07.402868105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:19:07.426797 containerd[1640]: time="2025-01-13T20:19:07.426735524Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:19:07.971911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1546760635.mount: Deactivated successfully. 
Jan 13 20:19:07.978495 containerd[1640]: time="2025-01-13T20:19:07.978431275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.980334 containerd[1640]: time="2025-01-13T20:19:07.980149968Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 13 20:19:07.981863 containerd[1640]: time="2025-01-13T20:19:07.981802740Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.985830 containerd[1640]: time="2025-01-13T20:19:07.985767690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:07.987213 containerd[1640]: time="2025-01-13T20:19:07.986618617Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 559.840293ms" Jan 13 20:19:07.987213 containerd[1640]: time="2025-01-13T20:19:07.986660297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:19:08.010985 containerd[1640]: time="2025-01-13T20:19:08.010719316Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:19:08.618428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2369077839.mount: Deactivated successfully. Jan 13 20:19:10.465900 containerd[1640]: time="2025-01-13T20:19:10.465832457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.467452 containerd[1640]: time="2025-01-13T20:19:10.467259827Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866" Jan 13 20:19:10.469261 containerd[1640]: time="2025-01-13T20:19:10.468353435Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.473105 containerd[1640]: time="2025-01-13T20:19:10.473065428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:19:10.475341 containerd[1640]: time="2025-01-13T20:19:10.475227243Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.464468127s" Jan 13 20:19:10.475341 containerd[1640]: time="2025-01-13T20:19:10.475323643Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 20:19:15.850158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Jan 13 20:19:15.861197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:15.998447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:16.013014 (kubelet)[2594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:19:16.066312 kubelet[2594]: E0113 20:19:16.065497 2594 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:19:16.069529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:19:16.069704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:19:16.924032 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:16.934764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:16.962005 systemd[1]: Reloading requested from client PID 2610 ('systemctl') (unit session-7.scope)... Jan 13 20:19:16.962169 systemd[1]: Reloading... Jan 13 20:19:17.093327 zram_generator::config[2660]: No configuration found. Jan 13 20:19:17.187442 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:19:17.251078 systemd[1]: Reloading finished in 288 ms. Jan 13 20:19:17.301838 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:19:17.301958 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:19:17.302716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:17.318936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:17.435567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:17.446794 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:17.502867 kubelet[2711]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:17.502867 kubelet[2711]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:17.502867 kubelet[2711]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:19:17.503249 kubelet[2711]: I0113 20:19:17.502911 2711 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:19:18.259059 kubelet[2711]: I0113 20:19:18.259004 2711 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:19:18.259059 kubelet[2711]: I0113 20:19:18.259055 2711 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:19:18.259556 kubelet[2711]: I0113 20:19:18.259516 2711 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:19:18.284335 kubelet[2711]: E0113 20:19:18.284215 2711 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.153.199:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.286348 kubelet[2711]: I0113 20:19:18.285858 2711 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:18.295661 kubelet[2711]: I0113 20:19:18.295624 2711 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:19:18.297522 kubelet[2711]: I0113 20:19:18.297483 2711 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:19:18.297882 kubelet[2711]: I0113 20:19:18.297859 2711 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:19:18.298362 kubelet[2711]: I0113 20:19:18.297980 2711 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:19:18.298362 kubelet[2711]: I0113 20:19:18.297996 2711 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:19:18.298362 kubelet[2711]: I0113 20:19:18.298118 2711 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:18.300831 kubelet[2711]: I0113 20:19:18.300802 2711 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:19:18.300936 kubelet[2711]: I0113 20:19:18.300925 2711 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:19:18.301070 kubelet[2711]: I0113 20:19:18.300995 2711 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:19:18.301070 kubelet[2711]: I0113 20:19:18.301012 2711 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:19:18.302350 kubelet[2711]: W0113 20:19:18.301657 2711 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.153.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-9-7c8f4a1e31&limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.302350 kubelet[2711]: E0113 20:19:18.301759 2711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.153.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-9-7c8f4a1e31&limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.303639 kubelet[2711]: I0113 20:19:18.303619 2711 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:19:18.304332 kubelet[2711]: I0113 20:19:18.304314 2711 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:19:18.304552 kubelet[2711]: W0113 20:19:18.304540 2711 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:19:18.306159 kubelet[2711]: I0113 20:19:18.305487 2711 server.go:1256] "Started kubelet" Jan 13 20:19:18.306159 kubelet[2711]: W0113 20:19:18.305600 2711 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.153.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.306159 kubelet[2711]: E0113 20:19:18.305641 2711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.153.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.311392 kubelet[2711]: I0113 20:19:18.311354 2711 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:19:18.311642 kubelet[2711]: E0113 20:19:18.311605 2711 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.153.199:6443/api/v1/namespaces/default/events\": dial tcp 138.199.153.199:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-9-7c8f4a1e31.181a59ff0eda33d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-9-7c8f4a1e31,UID:ci-4152-2-0-9-7c8f4a1e31,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-9-7c8f4a1e31,},FirstTimestamp:2025-01-13 20:19:18.305461205 +0000 UTC m=+0.854302298,LastTimestamp:2025-01-13 20:19:18.305461205 +0000 UTC m=+0.854302298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-9-7c8f4a1e31,}"
Jan 13 20:19:18.316681 kubelet[2711]: I0113 20:19:18.316626 2711 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:19:18.317587 kubelet[2711]: I0113 20:19:18.317554 2711 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:19:18.318630 kubelet[2711]: I0113 20:19:18.318602 2711 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:19:18.318874 kubelet[2711]: I0113 20:19:18.318847 2711 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:19:18.319540 kubelet[2711]: I0113 20:19:18.319510 2711 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:19:18.319607 kubelet[2711]: I0113 20:19:18.319600 2711 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:19:18.319642 kubelet[2711]: I0113 20:19:18.319635 2711 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:19:18.320268 kubelet[2711]: W0113 20:19:18.320199 2711 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.153.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.320268 kubelet[2711]: E0113 20:19:18.320267 2711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.153.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.321138 kubelet[2711]: E0113 20:19:18.320738 2711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-9-7c8f4a1e31?timeout=10s\": dial tcp 138.199.153.199:6443: connect: connection refused" interval="200ms" Jan 13 20:19:18.321138 kubelet[2711]: E0113 20:19:18.321094 2711 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:19:18.322965 kubelet[2711]: I0113 20:19:18.322928 2711 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:19:18.322965 kubelet[2711]: I0113 20:19:18.322954 2711 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:19:18.323059 kubelet[2711]: I0113 20:19:18.323036 2711 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:19:18.341922 kubelet[2711]: I0113 20:19:18.341891 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:19:18.343236 kubelet[2711]: I0113 20:19:18.343210 2711 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:19:18.343236 kubelet[2711]: I0113 20:19:18.343236 2711 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:19:18.343387 kubelet[2711]: I0113 20:19:18.343255 2711 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:19:18.343412 kubelet[2711]: E0113 20:19:18.343388 2711 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:19:18.351118 kubelet[2711]: W0113 20:19:18.351057 2711 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.153.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.351118 kubelet[2711]: E0113 20:19:18.351115 2711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.153.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:18.356629 kubelet[2711]: I0113 20:19:18.356597 2711 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:19:18.357121 kubelet[2711]: I0113 20:19:18.356825 2711 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:19:18.357121 kubelet[2711]: I0113 20:19:18.356852 2711 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:19:18.360049 kubelet[2711]: I0113 20:19:18.359893 2711 policy_none.go:49] "None policy: Start" Jan 13 20:19:18.361139 kubelet[2711]: I0113 20:19:18.360735 2711 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:19:18.361139 kubelet[2711]: I0113 20:19:18.360786 2711 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:19:18.368910 kubelet[2711]: I0113 20:19:18.368866 2711 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:19:18.370360 kubelet[2711]: I0113 20:19:18.369617 2711 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:19:18.373028 kubelet[2711]: E0113 20:19:18.373002 2711 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:18.423049 kubelet[2711]: I0113 20:19:18.423017 2711 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.423669 kubelet[2711]: E0113 20:19:18.423638 2711 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.199:6443/api/v1/nodes\": dial tcp 138.199.153.199:6443: connect: connection refused" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.444028 kubelet[2711]: I0113 20:19:18.443937 2711 topology_manager.go:215] "Topology Admit Handler" podUID="47948ca4d7dadaa025bba12dd9bef658" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.446813 kubelet[2711]: I0113 20:19:18.446492 2711 topology_manager.go:215] "Topology Admit Handler" podUID="a8e6523ac9e40df6e08120cce888d3c0" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:18.449596 kubelet[2711]: I0113 20:19:18.449206 2711 topology_manager.go:215] "Topology Admit Handler" podUID="f089cf54e9bae2b7b61ad719b15d6e53" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.521039 kubelet[2711]: I0113 20:19:18.520870 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.521039 kubelet[2711]: I0113 20:19:18.520937 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.521039 kubelet[2711]: I0113 20:19:18.520978 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.521039 kubelet[2711]: I0113 20:19:18.521027 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.522526 kubelet[2711]: I0113 20:19:18.521066 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8e6523ac9e40df6e08120cce888d3c0-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"a8e6523ac9e40df6e08120cce888d3c0\") " pod="kube-system/kube-scheduler-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.522526 kubelet[2711]: I0113 20:19:18.521102 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f089cf54e9bae2b7b61ad719b15d6e53-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"f089cf54e9bae2b7b61ad719b15d6e53\") " pod="kube-system/kube-apiserver-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.522526 kubelet[2711]: I0113 20:19:18.521160 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.522526 kubelet[2711]: I0113 20:19:18.521199 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f089cf54e9bae2b7b61ad719b15d6e53-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"f089cf54e9bae2b7b61ad719b15d6e53\") " pod="kube-system/kube-apiserver-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:18.522526 kubelet[2711]: I0113 20:19:18.521245 2711 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f089cf54e9bae2b7b61ad719b15d6e53-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"f089cf54e9bae2b7b61ad719b15d6e53\") " pod="kube-system/kube-apiserver-ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.522648 kubelet[2711]: E0113 20:19:18.521890 2711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-9-7c8f4a1e31?timeout=10s\": dial tcp 138.199.153.199:6443: connect: connection refused" interval="400ms" Jan 13 20:19:18.626946 kubelet[2711]: I0113 20:19:18.626894 2711 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.627505 kubelet[2711]: E0113 20:19:18.627377 2711 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.199:6443/api/v1/nodes\": dial tcp 138.199.153.199:6443: connect: connection refused" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:18.753748 containerd[1640]: time="2025-01-13T20:19:18.753344139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31,Uid:47948ca4d7dadaa025bba12dd9bef658,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:18.758227 containerd[1640]: time="2025-01-13T20:19:18.758125087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-9-7c8f4a1e31,Uid:a8e6523ac9e40df6e08120cce888d3c0,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:18.759165 containerd[1640]: time="2025-01-13T20:19:18.759006012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-9-7c8f4a1e31,Uid:f089cf54e9bae2b7b61ad719b15d6e53,Namespace:kube-system,Attempt:0,}" Jan 13 20:19:18.922960 kubelet[2711]: E0113 20:19:18.922909 2711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-9-7c8f4a1e31?timeout=10s\": dial tcp 138.199.153.199:6443: connect: connection refused" interval="800ms" Jan 13 20:19:19.030607 kubelet[2711]: I0113 20:19:19.030100 2711 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:19.030607 kubelet[2711]: E0113 20:19:19.030563 2711 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.153.199:6443/api/v1/nodes\": dial tcp 138.199.153.199:6443: connect: connection refused" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:19.224658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463760532.mount: Deactivated successfully.
Jan 13 20:19:19.232178 containerd[1640]: time="2025-01-13T20:19:19.232089063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.235815 containerd[1640]: time="2025-01-13T20:19:19.235715444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 13 20:19:19.235991 kubelet[2711]: W0113 20:19:19.235810 2711 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.153.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:19.235991 kubelet[2711]: E0113 20:19:19.235859 2711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.153.199:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:19.240338 containerd[1640]: time="2025-01-13T20:19:19.239084183Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.240899 containerd[1640]: time="2025-01-13T20:19:19.240823673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:19:19.243204 containerd[1640]: time="2025-01-13T20:19:19.243159687Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.244421 containerd[1640]: time="2025-01-13T20:19:19.244383134Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.244699 containerd[1640]: time="2025-01-13T20:19:19.244663055Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:19:19.247425 containerd[1640]: time="2025-01-13T20:19:19.247377311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:19:19.248959 containerd[1640]: time="2025-01-13T20:19:19.248903959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.836547ms" Jan 13 20:19:19.252153 containerd[1640]: time="2025-01-13T20:19:19.252106298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.876491ms"
Jan 13 20:19:19.257994 containerd[1640]: time="2025-01-13T20:19:19.257915531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.457752ms" Jan 13 20:19:19.265582 kubelet[2711]: W0113 20:19:19.265476 2711 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.153.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:19.265582 kubelet[2711]: E0113 20:19:19.265547 2711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.153.199:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:19.400954 containerd[1640]: time="2025-01-13T20:19:19.400517545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:19.400954 containerd[1640]: time="2025-01-13T20:19:19.400601385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:19.400954 containerd[1640]: time="2025-01-13T20:19:19.400616906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.402488 containerd[1640]: time="2025-01-13T20:19:19.402233035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.403077 containerd[1640]: time="2025-01-13T20:19:19.402400596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:19.403077 containerd[1640]: time="2025-01-13T20:19:19.402845678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:19.403077 containerd[1640]: time="2025-01-13T20:19:19.402856478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.403077 containerd[1640]: time="2025-01-13T20:19:19.402955319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.403759 containerd[1640]: time="2025-01-13T20:19:19.403623083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:19:19.404075 containerd[1640]: time="2025-01-13T20:19:19.403936805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:19:19.404075 containerd[1640]: time="2025-01-13T20:19:19.403961045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:19.404512 containerd[1640]: time="2025-01-13T20:19:19.404262486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:19:19.482442 containerd[1640]: time="2025-01-13T20:19:19.482245332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-9-7c8f4a1e31,Uid:f089cf54e9bae2b7b61ad719b15d6e53,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed443c043ec936e641fa55e20109b34969f2b43e8039ab1d12837aa8221b6f0\"" Jan 13 20:19:19.489337 containerd[1640]: time="2025-01-13T20:19:19.489072851Z" level=info msg="CreateContainer within sandbox \"aed443c043ec936e641fa55e20109b34969f2b43e8039ab1d12837aa8221b6f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:19:19.496954 containerd[1640]: time="2025-01-13T20:19:19.496907375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31,Uid:47948ca4d7dadaa025bba12dd9bef658,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6a5c4b5848bc4d01e606b1502a8b3afddb327c9d157ebaa7c2195b790abf857\"" Jan 13 20:19:19.500135 containerd[1640]: time="2025-01-13T20:19:19.500099634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-9-7c8f4a1e31,Uid:a8e6523ac9e40df6e08120cce888d3c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"91830ad3c9856bacbf4587c40b5cd02552b91afa08156a815af4b2ab759b9245\"" Jan 13 20:19:19.501963 containerd[1640]: time="2025-01-13T20:19:19.501838883Z" level=info msg="CreateContainer within sandbox \"c6a5c4b5848bc4d01e606b1502a8b3afddb327c9d157ebaa7c2195b790abf857\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:19:19.505583 containerd[1640]: time="2025-01-13T20:19:19.505545865Z" level=info msg="CreateContainer within sandbox \"aed443c043ec936e641fa55e20109b34969f2b43e8039ab1d12837aa8221b6f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d32bd557b8428bf0b33d54decd3f33282c015fa29d91aa1b1b864e7bddf79203\"" Jan 13 20:19:19.517137 containerd[1640]: time="2025-01-13T20:19:19.517096451Z" level=info msg="StartContainer for \"d32bd557b8428bf0b33d54decd3f33282c015fa29d91aa1b1b864e7bddf79203\"" Jan 13 20:19:19.519402 containerd[1640]: time="2025-01-13T20:19:19.518719980Z" level=info msg="CreateContainer within sandbox \"c6a5c4b5848bc4d01e606b1502a8b3afddb327c9d157ebaa7c2195b790abf857\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4918dbcbbea9a0f6372dc516dd4d723c300f0c3a1a056165ee69cd93c9aaf44e\"" Jan 13 20:19:19.519913 containerd[1640]: time="2025-01-13T20:19:19.519868226Z" level=info msg="StartContainer for \"4918dbcbbea9a0f6372dc516dd4d723c300f0c3a1a056165ee69cd93c9aaf44e\"" Jan 13 20:19:19.523046 containerd[1640]: time="2025-01-13T20:19:19.522787843Z" level=info msg="CreateContainer within sandbox \"91830ad3c9856bacbf4587c40b5cd02552b91afa08156a815af4b2ab759b9245\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:19:19.544356 containerd[1640]: time="2025-01-13T20:19:19.544192525Z" level=info msg="CreateContainer within sandbox \"91830ad3c9856bacbf4587c40b5cd02552b91afa08156a815af4b2ab759b9245\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d2c266681aa61c669392d9215775ab361217588597a31751dc0cea8a8762c64\"" Jan 13 20:19:19.545118 containerd[1640]: time="2025-01-13T20:19:19.545091690Z" level=info msg="StartContainer for \"8d2c266681aa61c669392d9215775ab361217588597a31751dc0cea8a8762c64\""
Jan 13 20:19:19.600349 kubelet[2711]: W0113 20:19:19.599125 2711 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.153.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:19.600349 kubelet[2711]: E0113 20:19:19.599195 2711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.153.199:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:19.614070 containerd[1640]: time="2025-01-13T20:19:19.614024364Z" level=info msg="StartContainer for \"4918dbcbbea9a0f6372dc516dd4d723c300f0c3a1a056165ee69cd93c9aaf44e\" returns successfully" Jan 13 20:19:19.623563 kubelet[2711]: W0113 20:19:19.621894 2711 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.153.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-9-7c8f4a1e31&limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:19.623563 kubelet[2711]: E0113 20:19:19.621955 2711 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.153.199:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-9-7c8f4a1e31&limit=500&resourceVersion=0": dial tcp 138.199.153.199:6443: connect: connection refused Jan 13 20:19:19.625566 containerd[1640]: time="2025-01-13T20:19:19.624732545Z" level=info msg="StartContainer for \"d32bd557b8428bf0b33d54decd3f33282c015fa29d91aa1b1b864e7bddf79203\" returns successfully" Jan 13 20:19:19.694936 containerd[1640]: time="2025-01-13T20:19:19.694871546Z" level=info msg="StartContainer for \"8d2c266681aa61c669392d9215775ab361217588597a31751dc0cea8a8762c64\" returns successfully" Jan 13 20:19:19.723943 kubelet[2711]: E0113 20:19:19.723903 2711 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.153.199:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-9-7c8f4a1e31?timeout=10s\": dial tcp 138.199.153.199:6443: connect: connection refused" interval="1.6s" Jan 13 20:19:19.832986 kubelet[2711]: I0113 20:19:19.832877 2711 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:21.430722 kubelet[2711]: E0113 20:19:21.430616 2711 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-9-7c8f4a1e31\" not found" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:21.506354 kubelet[2711]: I0113 20:19:21.504885 2711 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-9-7c8f4a1e31" Jan 13 20:19:21.528589 kubelet[2711]: E0113 20:19:21.526851 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:21.629573 kubelet[2711]: E0113 20:19:21.629520 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:21.730108 kubelet[2711]: E0113 20:19:21.729975 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found"
Jan 13 20:19:21.830246 kubelet[2711]: E0113 20:19:21.830191 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:21.930690 kubelet[2711]: E0113 20:19:21.930648 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:22.031197 kubelet[2711]: E0113 20:19:22.031072 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:22.131543 kubelet[2711]: E0113 20:19:22.131494 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:22.231991 kubelet[2711]: E0113 20:19:22.231946 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:22.332962 kubelet[2711]: E0113 20:19:22.332849 2711 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-9-7c8f4a1e31\" not found" Jan 13 20:19:23.316938 kubelet[2711]: I0113 20:19:23.316889 2711 apiserver.go:52] "Watching apiserver" Jan 13 20:19:23.420194 kubelet[2711]: I0113 20:19:23.420111 2711 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:19:24.123162 systemd[1]: Reloading requested from client PID 2984 ('systemctl') (unit session-7.scope)... Jan 13 20:19:24.123559 systemd[1]: Reloading... Jan 13 20:19:24.208461 zram_generator::config[3024]: No configuration found. Jan 13 20:19:24.329229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:19:24.401510 systemd[1]: Reloading finished in 277 ms. Jan 13 20:19:24.443591 kubelet[2711]: I0113 20:19:24.443518 2711 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:19:24.445504 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:24.457000 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:19:24.457737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:24.469872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:19:24.597450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:19:24.606249 (kubelet)[3079]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:19:24.672495 kubelet[3079]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:19:24.672495 kubelet[3079]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:19:24.672495 kubelet[3079]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:19:24.672495 kubelet[3079]: I0113 20:19:24.671841 3079 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:19:24.678986 kubelet[3079]: I0113 20:19:24.678840 3079 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:19:24.678986 kubelet[3079]: I0113 20:19:24.678873 3079 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:19:24.679154 kubelet[3079]: I0113 20:19:24.679122 3079 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:19:24.681153 kubelet[3079]: I0113 20:19:24.681109 3079 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 20:19:24.684261 kubelet[3079]: I0113 20:19:24.684226 3079 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:19:24.699885 kubelet[3079]: I0113 20:19:24.699649 3079 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:19:24.701286 kubelet[3079]: I0113 20:19:24.701182 3079 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:19:24.702749 kubelet[3079]: I0113 20:19:24.702252 3079 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:19:24.704037 kubelet[3079]: I0113 20:19:24.702979 3079 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:19:24.704037 kubelet[3079]: I0113 20:19:24.702999 3079 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:19:24.704037 kubelet[3079]: I0113 20:19:24.703042 3079 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:19:24.704037 kubelet[3079]: I0113 20:19:24.703161 3079 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:19:24.704037 kubelet[3079]: I0113 20:19:24.703177 3079 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:19:24.704037 kubelet[3079]: I0113 20:19:24.703205 3079 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:19:24.704037 kubelet[3079]: I0113 20:19:24.703226 3079 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:19:24.705333 kubelet[3079]: I0113 20:19:24.705249 3079 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:19:24.705629 kubelet[3079]: I0113 20:19:24.705613 3079 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:19:24.706163 kubelet[3079]: I0113 20:19:24.706147 3079 server.go:1256] "Started kubelet"
Jan 13 20:19:24.711683 kubelet[3079]: I0113 20:19:24.710831 3079 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:19:24.720135 kubelet[3079]: I0113 20:19:24.718974 3079 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:19:24.721769 kubelet[3079]: I0113 20:19:24.721743 3079 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:19:24.722102 kubelet[3079]: I0113 20:19:24.722086 3079 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:19:24.722249 kubelet[3079]: I0113 20:19:24.711731 3079 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:19:24.727579 kubelet[3079]: I0113 20:19:24.727547 3079 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:19:24.729198 sudo[3093]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 13 20:19:24.729643 sudo[3093]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 13 20:19:24.730090 kubelet[3079]: I0113 20:19:24.730071 3079 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:19:24.732959 kubelet[3079]: I0113 20:19:24.731568 3079 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:19:24.737775 kubelet[3079]: I0113 20:19:24.737390 3079 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:19:24.744473 kubelet[3079]: I0113 20:19:24.744434 3079 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:19:24.744473 kubelet[3079]: I0113 20:19:24.744476 3079 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:19:24.744603 kubelet[3079]: I0113 20:19:24.744495 3079 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:19:24.744603 kubelet[3079]: E0113 20:19:24.744558 3079 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:19:24.770187 kubelet[3079]: I0113 20:19:24.769992 3079 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:19:24.770187 kubelet[3079]: I0113 20:19:24.770094 3079 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:19:24.791136 kubelet[3079]: E0113 20:19:24.789653 3079 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:19:24.791136 kubelet[3079]: I0113 20:19:24.790943 3079 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:19:24.835077 kubelet[3079]: I0113 20:19:24.835038 3079 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:24.845345 kubelet[3079]: E0113 20:19:24.844756 3079 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 20:19:24.847344 kubelet[3079]: I0113 20:19:24.846067 3079 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:24.847344 kubelet[3079]: I0113 20:19:24.846204 3079 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:24.868203 kubelet[3079]: I0113 20:19:24.868166 3079 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:19:24.868203 kubelet[3079]: I0113 20:19:24.868205 3079 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:19:24.868391 kubelet[3079]: I0113 20:19:24.868225 3079 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:19:24.868417 kubelet[3079]: I0113 20:19:24.868409 3079 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 20:19:24.869063 kubelet[3079]: I0113 20:19:24.868482 3079 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 20:19:24.869063 kubelet[3079]: I0113 20:19:24.868499 3079 policy_none.go:49] "None policy: Start"
Jan 13 20:19:24.869524 kubelet[3079]: I0113 20:19:24.869320 3079 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:19:24.869524 kubelet[3079]: I0113 20:19:24.869354 3079 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:19:24.869524 kubelet[3079]: I0113 20:19:24.869490 3079 state_mem.go:75] "Updated machine memory state"
Jan 13 20:19:24.874153 kubelet[3079]: I0113 20:19:24.870967 3079 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:19:24.874153 kubelet[3079]: I0113 20:19:24.872714 3079 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:19:25.045688 kubelet[3079]: I0113 20:19:25.045576 3079 topology_manager.go:215] "Topology Admit Handler" podUID="a8e6523ac9e40df6e08120cce888d3c0" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.046111 kubelet[3079]: I0113 20:19:25.046087 3079 topology_manager.go:215] "Topology Admit Handler" podUID="f089cf54e9bae2b7b61ad719b15d6e53" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.046234 kubelet[3079]: I0113 20:19:25.046218 3079 topology_manager.go:215] "Topology Admit Handler" podUID="47948ca4d7dadaa025bba12dd9bef658" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.058384 kubelet[3079]: E0113 20:19:25.058158 3079 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152-2-0-9-7c8f4a1e31\" already exists" pod="kube-system/kube-scheduler-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233610 kubelet[3079]: I0113 20:19:25.233574 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8e6523ac9e40df6e08120cce888d3c0-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"a8e6523ac9e40df6e08120cce888d3c0\") " pod="kube-system/kube-scheduler-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233610 kubelet[3079]: I0113 20:19:25.233618 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f089cf54e9bae2b7b61ad719b15d6e53-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"f089cf54e9bae2b7b61ad719b15d6e53\") " pod="kube-system/kube-apiserver-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233769 kubelet[3079]: I0113 20:19:25.233648 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f089cf54e9bae2b7b61ad719b15d6e53-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"f089cf54e9bae2b7b61ad719b15d6e53\") " pod="kube-system/kube-apiserver-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233769 kubelet[3079]: I0113 20:19:25.233671 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233769 kubelet[3079]: I0113 20:19:25.233692 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233769 kubelet[3079]: I0113 20:19:25.233727 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f089cf54e9bae2b7b61ad719b15d6e53-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"f089cf54e9bae2b7b61ad719b15d6e53\") " pod="kube-system/kube-apiserver-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233769 kubelet[3079]: I0113 20:19:25.233749 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233882 kubelet[3079]: I0113 20:19:25.233769 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.233882 kubelet[3079]: I0113 20:19:25.233789 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47948ca4d7dadaa025bba12dd9bef658-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31\" (UID: \"47948ca4d7dadaa025bba12dd9bef658\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.246495 sudo[3093]: pam_unix(sudo:session): session closed for user root
Jan 13 20:19:25.704073 kubelet[3079]: I0113 20:19:25.704013 3079 apiserver.go:52] "Watching apiserver"
Jan 13 20:19:25.732136 kubelet[3079]: I0113 20:19:25.732080 3079 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:19:25.833517 kubelet[3079]: E0113 20:19:25.833476 3079 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152-2-0-9-7c8f4a1e31\" already exists" pod="kube-system/kube-scheduler-ci-4152-2-0-9-7c8f4a1e31"
Jan 13 20:19:25.873883 kubelet[3079]: I0113 20:19:25.873840 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-9-7c8f4a1e31" podStartSLOduration=0.873781154 podStartE2EDuration="873.781154ms" podCreationTimestamp="2025-01-13 20:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:25.872599308 +0000 UTC m=+1.260993426" watchObservedRunningTime="2025-01-13 20:19:25.873781154 +0000 UTC m=+1.262175232"
Jan 13 20:19:25.874108 kubelet[3079]: I0113 20:19:25.873963 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-9-7c8f4a1e31" podStartSLOduration=3.873945874 podStartE2EDuration="3.873945874s" podCreationTimestamp="2025-01-13 20:19:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:25.857007749 +0000 UTC m=+1.245401827" watchObservedRunningTime="2025-01-13 20:19:25.873945874 +0000 UTC m=+1.262339952"
Jan 13 20:19:25.886100 kubelet[3079]: I0113 20:19:25.886046 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-9-7c8f4a1e31" podStartSLOduration=0.885982695 podStartE2EDuration="885.982695ms" podCreationTimestamp="2025-01-13 20:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:25.883171681 +0000 UTC m=+1.271565799" watchObservedRunningTime="2025-01-13 20:19:25.885982695 +0000 UTC m=+1.274376733"
Jan 13 20:19:26.921976 sudo[2098]: pam_unix(sudo:session): session closed for user root
Jan 13 20:19:27.081805 sshd[2097]: Connection closed by 147.75.109.163 port 34312
Jan 13 20:19:27.082577 sshd-session[2094]: pam_unix(sshd:session): session closed for user core
Jan 13 20:19:27.088917 systemd[1]: sshd@6-138.199.153.199:22-147.75.109.163:34312.service: Deactivated successfully.
Jan 13 20:19:27.089560 systemd-logind[1612]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:19:27.093636 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:19:27.095415 systemd-logind[1612]: Removed session 7.
Jan 13 20:19:36.642372 kubelet[3079]: I0113 20:19:36.642340 3079 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 20:19:36.643592 kubelet[3079]: I0113 20:19:36.643038 3079 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 20:19:36.643625 containerd[1640]: time="2025-01-13T20:19:36.642785654Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:19:37.623858 kubelet[3079]: I0113 20:19:37.622564 3079 topology_manager.go:215] "Topology Admit Handler" podUID="83e1176e-d1e3-4eb0-9df0-04700115a8e6" podNamespace="kube-system" podName="kube-proxy-z4fwk"
Jan 13 20:19:37.644712 kubelet[3079]: I0113 20:19:37.644674 3079 topology_manager.go:215] "Topology Admit Handler" podUID="d52cfc35-c25c-44c0-9016-71d43cacf0f3" podNamespace="kube-system" podName="cilium-zwhbv"
Jan 13 20:19:37.720381 kubelet[3079]: I0113 20:19:37.716587 3079 topology_manager.go:215] "Topology Admit Handler" podUID="71db5af9-23d1-4da9-a2d3-888d7e0ee85e" podNamespace="kube-system" podName="cilium-operator-5cc964979-r52km"
Jan 13 20:19:37.721385 kubelet[3079]: I0113 20:19:37.721186 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83e1176e-d1e3-4eb0-9df0-04700115a8e6-xtables-lock\") pod \"kube-proxy-z4fwk\" (UID: \"83e1176e-d1e3-4eb0-9df0-04700115a8e6\") " pod="kube-system/kube-proxy-z4fwk"
Jan 13 20:19:37.722237 kubelet[3079]: I0113 20:19:37.722215 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-cgroup\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.722426 kubelet[3079]: I0113 20:19:37.722413 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cni-path\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.722494 kubelet[3079]: I0113 20:19:37.722486 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83e1176e-d1e3-4eb0-9df0-04700115a8e6-lib-modules\") pod \"kube-proxy-z4fwk\" (UID: \"83e1176e-d1e3-4eb0-9df0-04700115a8e6\") " pod="kube-system/kube-proxy-z4fwk"
Jan 13 20:19:37.722585 kubelet[3079]: I0113 20:19:37.722572 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83e1176e-d1e3-4eb0-9df0-04700115a8e6-kube-proxy\") pod \"kube-proxy-z4fwk\" (UID: \"83e1176e-d1e3-4eb0-9df0-04700115a8e6\") " pod="kube-system/kube-proxy-z4fwk"
Jan 13 20:19:37.722769 kubelet[3079]: I0113 20:19:37.722652 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-etc-cni-netd\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.723384 kubelet[3079]: I0113 20:19:37.722852 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-xtables-lock\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.723511 kubelet[3079]: I0113 20:19:37.723496 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d52cfc35-c25c-44c0-9016-71d43cacf0f3-clustermesh-secrets\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.723582 kubelet[3079]: I0113 20:19:37.723574 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-config-path\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.723651 kubelet[3079]: I0113 20:19:37.723640 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-host-proc-sys-net\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.723740 kubelet[3079]: I0113 20:19:37.723728 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-host-proc-sys-kernel\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.723829 kubelet[3079]: I0113 20:19:37.723801 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6n4l\" (UniqueName: \"kubernetes.io/projected/d52cfc35-c25c-44c0-9016-71d43cacf0f3-kube-api-access-h6n4l\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.724151 kubelet[3079]: I0113 20:19:37.724135 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d52cfc35-c25c-44c0-9016-71d43cacf0f3-hubble-tls\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.724691 kubelet[3079]: I0113 20:19:37.724239 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlmwg\" (UniqueName: \"kubernetes.io/projected/83e1176e-d1e3-4eb0-9df0-04700115a8e6-kube-api-access-nlmwg\") pod \"kube-proxy-z4fwk\" (UID: \"83e1176e-d1e3-4eb0-9df0-04700115a8e6\") " pod="kube-system/kube-proxy-z4fwk"
Jan 13 20:19:37.724827 kubelet[3079]: I0113 20:19:37.724811 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-bpf-maps\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.724891 kubelet[3079]: I0113 20:19:37.724883 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-lib-modules\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.725612 kubelet[3079]: I0113 20:19:37.724955 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-run\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.725726 kubelet[3079]: I0113 20:19:37.725713 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-hostproc\") pod \"cilium-zwhbv\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " pod="kube-system/cilium-zwhbv"
Jan 13 20:19:37.827521 kubelet[3079]: I0113 20:19:37.827479 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71db5af9-23d1-4da9-a2d3-888d7e0ee85e-cilium-config-path\") pod \"cilium-operator-5cc964979-r52km\" (UID: \"71db5af9-23d1-4da9-a2d3-888d7e0ee85e\") " pod="kube-system/cilium-operator-5cc964979-r52km"
Jan 13 20:19:37.828370 kubelet[3079]: I0113 20:19:37.827812 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqm6t\" (UniqueName: \"kubernetes.io/projected/71db5af9-23d1-4da9-a2d3-888d7e0ee85e-kube-api-access-bqm6t\") pod \"cilium-operator-5cc964979-r52km\" (UID: \"71db5af9-23d1-4da9-a2d3-888d7e0ee85e\") " pod="kube-system/cilium-operator-5cc964979-r52km"
Jan 13 20:19:37.932396 containerd[1640]: time="2025-01-13T20:19:37.931342722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4fwk,Uid:83e1176e-d1e3-4eb0-9df0-04700115a8e6,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:37.954978 containerd[1640]: time="2025-01-13T20:19:37.954864577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zwhbv,Uid:d52cfc35-c25c-44c0-9016-71d43cacf0f3,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:37.977462 containerd[1640]: time="2025-01-13T20:19:37.977119706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:37.977782 containerd[1640]: time="2025-01-13T20:19:37.977466387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:37.977782 containerd[1640]: time="2025-01-13T20:19:37.977533867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:37.978082 containerd[1640]: time="2025-01-13T20:19:37.977806309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:37.987017 containerd[1640]: time="2025-01-13T20:19:37.986905425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:37.987179 containerd[1640]: time="2025-01-13T20:19:37.987051026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:37.987179 containerd[1640]: time="2025-01-13T20:19:37.987086266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:37.988683 containerd[1640]: time="2025-01-13T20:19:37.988578792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:38.031959 containerd[1640]: time="2025-01-13T20:19:38.031905443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-r52km,Uid:71db5af9-23d1-4da9-a2d3-888d7e0ee85e,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:38.044045 containerd[1640]: time="2025-01-13T20:19:38.043944611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zwhbv,Uid:d52cfc35-c25c-44c0-9016-71d43cacf0f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\""
Jan 13 20:19:38.048104 containerd[1640]: time="2025-01-13T20:19:38.048044467Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 20:19:38.057786 containerd[1640]: time="2025-01-13T20:19:38.057745465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4fwk,Uid:83e1176e-d1e3-4eb0-9df0-04700115a8e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2809abf88f280d71e6da66bf9d9a322a8428e5ada3ed1e3617064e31dc0f3170\""
Jan 13 20:19:38.063957 containerd[1640]: time="2025-01-13T20:19:38.063912609Z" level=info msg="CreateContainer within sandbox \"2809abf88f280d71e6da66bf9d9a322a8428e5ada3ed1e3617064e31dc0f3170\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:19:38.080240 containerd[1640]: time="2025-01-13T20:19:38.079956112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:19:38.080240 containerd[1640]: time="2025-01-13T20:19:38.080118073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:19:38.080240 containerd[1640]: time="2025-01-13T20:19:38.080138193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:38.080805 containerd[1640]: time="2025-01-13T20:19:38.080712075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:19:38.084142 containerd[1640]: time="2025-01-13T20:19:38.084098529Z" level=info msg="CreateContainer within sandbox \"2809abf88f280d71e6da66bf9d9a322a8428e5ada3ed1e3617064e31dc0f3170\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3df3d077692a56b748a5edfdd9cc249c04e20b221cbc0dd3fa4d1cf7a351bbe7\""
Jan 13 20:19:38.084862 containerd[1640]: time="2025-01-13T20:19:38.084828812Z" level=info msg="StartContainer for \"3df3d077692a56b748a5edfdd9cc249c04e20b221cbc0dd3fa4d1cf7a351bbe7\""
Jan 13 20:19:38.140068 containerd[1640]: time="2025-01-13T20:19:38.140032069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-r52km,Uid:71db5af9-23d1-4da9-a2d3-888d7e0ee85e,Namespace:kube-system,Attempt:0,} returns sandbox id \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\""
Jan 13 20:19:38.160083 containerd[1640]: time="2025-01-13T20:19:38.159903427Z" level=info msg="StartContainer for \"3df3d077692a56b748a5edfdd9cc249c04e20b221cbc0dd3fa4d1cf7a351bbe7\" returns successfully"
Jan 13 20:19:38.879527 kubelet[3079]: I0113 20:19:38.878946 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-z4fwk" podStartSLOduration=1.8788807379999999 podStartE2EDuration="1.878880738s" podCreationTimestamp="2025-01-13 20:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:19:38.878349496 +0000 UTC m=+14.266743614" watchObservedRunningTime="2025-01-13 20:19:38.878880738 +0000 UTC m=+14.267274816"
Jan 13 20:19:53.412129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937560588.mount: Deactivated successfully.
Jan 13 20:19:54.774762 containerd[1640]: time="2025-01-13T20:19:54.774675824Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:54.776321 containerd[1640]: time="2025-01-13T20:19:54.775679627Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650962"
Jan 13 20:19:54.778603 containerd[1640]: time="2025-01-13T20:19:54.778557436Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:54.780945 containerd[1640]: time="2025-01-13T20:19:54.780901043Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 16.732804256s"
Jan 13 20:19:54.781067 containerd[1640]: time="2025-01-13T20:19:54.780955364Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 13 20:19:54.782401 containerd[1640]: time="2025-01-13T20:19:54.782361408Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 20:19:54.784404 containerd[1640]: time="2025-01-13T20:19:54.784333934Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:19:54.809931 containerd[1640]: time="2025-01-13T20:19:54.807855967Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\""
Jan 13 20:19:54.810541 containerd[1640]: time="2025-01-13T20:19:54.810330374Z" level=info msg="StartContainer for \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\""
Jan 13 20:19:54.885056 containerd[1640]: time="2025-01-13T20:19:54.884990084Z" level=info msg="StartContainer for \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\" returns successfully"
Jan 13 20:19:54.952471 containerd[1640]: time="2025-01-13T20:19:54.952424813Z" level=error msg="collecting metrics for 115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590" error="cgroups: cgroup deleted: unknown"
Jan 13 20:19:55.167836 containerd[1640]: time="2025-01-13T20:19:55.167759070Z" level=info msg="shim disconnected" id=115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590 namespace=k8s.io
Jan 13 20:19:55.167836 containerd[1640]: time="2025-01-13T20:19:55.167828710Z" level=warning msg="cleaning up after shim disconnected" id=115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590 namespace=k8s.io
Jan 13 20:19:55.167836 containerd[1640]: time="2025-01-13T20:19:55.167839590Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:19:55.806921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590-rootfs.mount: Deactivated successfully.
Jan 13 20:19:55.910605 containerd[1640]: time="2025-01-13T20:19:55.910231771Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:19:55.946584 containerd[1640]: time="2025-01-13T20:19:55.946455081Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\""
Jan 13 20:19:55.947479 containerd[1640]: time="2025-01-13T20:19:55.947185683Z" level=info msg="StartContainer for \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\""
Jan 13 20:19:56.009023 containerd[1640]: time="2025-01-13T20:19:56.008900591Z" level=info msg="StartContainer for \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\" returns successfully"
Jan 13 20:19:56.021910 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:19:56.022191 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:19:56.022323 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:19:56.031136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:19:56.054945 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:19:56.072655 containerd[1640]: time="2025-01-13T20:19:56.072505342Z" level=info msg="shim disconnected" id=efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530 namespace=k8s.io
Jan 13 20:19:56.072655 containerd[1640]: time="2025-01-13T20:19:56.072565262Z" level=warning msg="cleaning up after shim disconnected" id=efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530 namespace=k8s.io
Jan 13 20:19:56.072655 containerd[1640]: time="2025-01-13T20:19:56.072575462Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:19:56.808660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530-rootfs.mount: Deactivated successfully.
Jan 13 20:19:56.916733 containerd[1640]: time="2025-01-13T20:19:56.916550799Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:19:56.940267 containerd[1640]: time="2025-01-13T20:19:56.940021310Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\""
Jan 13 20:19:56.942433 containerd[1640]: time="2025-01-13T20:19:56.942397557Z" level=info msg="StartContainer for \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\""
Jan 13 20:19:57.009596 containerd[1640]: time="2025-01-13T20:19:57.009548319Z" level=info msg="StartContainer for \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\" returns successfully"
Jan 13 20:19:57.044695 containerd[1640]: time="2025-01-13T20:19:57.044587823Z" level=info msg="shim disconnected" id=6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61 namespace=k8s.io
Jan 13 20:19:57.044892 containerd[1640]: time="2025-01-13T20:19:57.044696303Z" level=warning msg="cleaning up after shim disconnected" id=6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61 namespace=k8s.io
Jan 13 20:19:57.044892 containerd[1640]: time="2025-01-13T20:19:57.044721023Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:19:57.064971 containerd[1640]: time="2025-01-13T20:19:57.064841963Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:19:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 20:19:57.796650 containerd[1640]: time="2025-01-13T20:19:57.796574695Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:57.797969 containerd[1640]: time="2025-01-13T20:19:57.797901739Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138342"
Jan 13 20:19:57.798186 containerd[1640]: time="2025-01-13T20:19:57.798111819Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:19:57.800458 containerd[1640]: time="2025-01-13T20:19:57.800273106Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.017867258s"
Jan 13 20:19:57.800458 containerd[1640]: time="2025-01-13T20:19:57.800327666Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 13 20:19:57.805935 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61-rootfs.mount: Deactivated successfully.
Jan 13 20:19:57.808121 containerd[1640]: time="2025-01-13T20:19:57.807783728Z" level=info msg="CreateContainer within sandbox \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 20:19:57.827949 containerd[1640]: time="2025-01-13T20:19:57.827899588Z" level=info msg="CreateContainer within sandbox \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\""
Jan 13 20:19:57.829452 containerd[1640]: time="2025-01-13T20:19:57.829397992Z" level=info msg="StartContainer for \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\""
Jan 13 20:19:57.891879 containerd[1640]: time="2025-01-13T20:19:57.891790217Z" level=info msg="StartContainer for \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\" returns successfully"
Jan 13 20:19:57.925475 containerd[1640]: time="2025-01-13T20:19:57.925426997Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:19:57.943507 containerd[1640]: time="2025-01-13T20:19:57.943256650Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\""
Jan 13 20:19:57.944794 containerd[1640]: time="2025-01-13T20:19:57.944110853Z" level=info msg="StartContainer for \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\""
Jan 13 20:19:57.964615 kubelet[3079]: I0113 20:19:57.964143 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-r52km" podStartSLOduration=1.30490336 podStartE2EDuration="20.964097992s" podCreationTimestamp="2025-01-13 20:19:37 +0000 UTC" firstStartedPulling="2025-01-13 20:19:38.141936916 +0000 UTC m=+13.530330994" lastFinishedPulling="2025-01-13 20:19:57.801131548 +0000 UTC m=+33.189525626" observedRunningTime="2025-01-13 20:19:57.963091909 +0000 UTC m=+33.351485987" watchObservedRunningTime="2025-01-13 20:19:57.964097992 +0000 UTC m=+33.352492070"
Jan 13 20:19:58.039893 containerd[1640]: time="2025-01-13T20:19:58.039344254Z" level=info msg="StartContainer for \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\" returns successfully"
Jan 13 20:19:58.115984 containerd[1640]: time="2025-01-13T20:19:58.115916118Z" level=info msg="shim disconnected" id=7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026 namespace=k8s.io
Jan 13 20:19:58.116209 containerd[1640]: time="2025-01-13T20:19:58.115989039Z" level=warning msg="cleaning up after shim disconnected" id=7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026 namespace=k8s.io
Jan 13 20:19:58.116209 containerd[1640]: time="2025-01-13T20:19:58.116039799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:19:58.940082 containerd[1640]: time="2025-01-13T20:19:58.940028814Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:19:58.960266 containerd[1640]: time="2025-01-13T20:19:58.960204714Z" level=info msg="CreateContainer within sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\""
Jan 13 20:19:58.961016 containerd[1640]: time="2025-01-13T20:19:58.960971996Z" level=info msg="StartContainer for \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\""
Jan 13 20:19:59.036656 containerd[1640]: time="2025-01-13T20:19:59.036576976Z" level=info msg="StartContainer for \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\" returns successfully"
Jan 13 20:19:59.122073 kubelet[3079]: I0113 20:19:59.121550 3079 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 13 20:19:59.160797 kubelet[3079]: I0113 20:19:59.157711 3079 topology_manager.go:215] "Topology Admit Handler" podUID="02597b82-4ed9-4084-a74a-1ca4e81f96c4" podNamespace="kube-system" podName="coredns-76f75df574-vv5p5"
Jan 13 20:19:59.170121 kubelet[3079]: I0113 20:19:59.170065 3079 topology_manager.go:215] "Topology Admit Handler" podUID="1fb868fc-b73c-422d-b6e8-e58f52193d86" podNamespace="kube-system" podName="coredns-76f75df574-5dxmf"
Jan 13 20:19:59.184326 kubelet[3079]: I0113 20:19:59.183517 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02597b82-4ed9-4084-a74a-1ca4e81f96c4-config-volume\") pod \"coredns-76f75df574-vv5p5\" (UID: \"02597b82-4ed9-4084-a74a-1ca4e81f96c4\") " pod="kube-system/coredns-76f75df574-vv5p5"
Jan 13 20:19:59.184326 kubelet[3079]: I0113 20:19:59.183608 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wzgq\" (UniqueName: \"kubernetes.io/projected/02597b82-4ed9-4084-a74a-1ca4e81f96c4-kube-api-access-6wzgq\") pod \"coredns-76f75df574-vv5p5\" (UID: \"02597b82-4ed9-4084-a74a-1ca4e81f96c4\") " pod="kube-system/coredns-76f75df574-vv5p5"
Jan 13 20:19:59.284697 kubelet[3079]: I0113 20:19:59.284559 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fb868fc-b73c-422d-b6e8-e58f52193d86-config-volume\") pod \"coredns-76f75df574-5dxmf\" (UID: \"1fb868fc-b73c-422d-b6e8-e58f52193d86\") " pod="kube-system/coredns-76f75df574-5dxmf"
Jan 13 20:19:59.284697 kubelet[3079]: I0113 20:19:59.284626 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzsqr\" (UniqueName: \"kubernetes.io/projected/1fb868fc-b73c-422d-b6e8-e58f52193d86-kube-api-access-fzsqr\") pod \"coredns-76f75df574-5dxmf\" (UID: \"1fb868fc-b73c-422d-b6e8-e58f52193d86\") " pod="kube-system/coredns-76f75df574-5dxmf"
Jan 13 20:19:59.468460 containerd[1640]: time="2025-01-13T20:19:59.467148023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vv5p5,Uid:02597b82-4ed9-4084-a74a-1ca4e81f96c4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:59.498897 containerd[1640]: time="2025-01-13T20:19:59.498590994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dxmf,Uid:1fb868fc-b73c-422d-b6e8-e58f52193d86,Namespace:kube-system,Attempt:0,}"
Jan 13 20:19:59.970370 kubelet[3079]: I0113 20:19:59.970270 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zwhbv" podStartSLOduration=6.236080739 podStartE2EDuration="22.97022492s" podCreationTimestamp="2025-01-13 20:19:37 +0000 UTC" firstStartedPulling="2025-01-13 20:19:38.047241704 +0000 UTC m=+13.435635782" lastFinishedPulling="2025-01-13 20:19:54.781385845 +0000 UTC m=+30.169779963" observedRunningTime="2025-01-13 20:19:59.967099111 +0000 UTC m=+35.355493229" watchObservedRunningTime="2025-01-13 20:19:59.97022492 +0000 UTC m=+35.358618998"
Jan 13 20:20:02.071241 systemd-networkd[1242]: cilium_host: Link UP
Jan 13 20:20:02.071407 systemd-networkd[1242]: cilium_net: Link UP
Jan 13 20:20:02.071410 systemd-networkd[1242]: cilium_net: Gained carrier
Jan 13 20:20:02.072480 systemd-networkd[1242]: cilium_host: Gained carrier
Jan 13 20:20:02.202838 systemd-networkd[1242]: cilium_vxlan: Link UP
Jan 13 20:20:02.202846 systemd-networkd[1242]: cilium_vxlan: Gained carrier
Jan 13 20:20:02.339537 systemd-networkd[1242]: cilium_net: Gained IPv6LL
Jan 13 20:20:02.517411 kernel: NET: Registered PF_ALG protocol family
Jan 13 20:20:02.828221 systemd-networkd[1242]: cilium_host: Gained IPv6LL
Jan 13 20:20:03.259517 systemd-networkd[1242]: lxc_health: Link UP
Jan 13 20:20:03.265657 systemd-networkd[1242]: lxc_health: Gained carrier
Jan 13 20:20:03.467438 systemd-networkd[1242]: cilium_vxlan: Gained IPv6LL
Jan 13 20:20:03.559642 systemd-networkd[1242]: lxcf457b6e3522e: Link UP
Jan 13 20:20:03.565347 kernel: eth0: renamed from tmp7c902
Jan 13 20:20:03.573578 systemd-networkd[1242]: lxcf457b6e3522e: Gained carrier
Jan 13 20:20:03.600451 systemd-networkd[1242]: tmpdb246: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:20:03.600550 systemd-networkd[1242]: tmpdb246: Cannot enable IPv6, ignoring: No such file or directory
Jan 13 20:20:03.600614 systemd-networkd[1242]: tmpdb246: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory
Jan 13 20:20:03.600643 systemd-networkd[1242]: tmpdb246: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory
Jan 13 20:20:03.600658 systemd-networkd[1242]: tmpdb246: Cannot set IPv6 proxy NDP, ignoring: No such file or directory
Jan 13 20:20:03.600672 systemd-networkd[1242]: tmpdb246: Cannot enable promote_secondaries for interface, ignoring: No such file or directory
Jan 13 20:20:03.600989 systemd-networkd[1242]: lxc5c369245693b: Link UP
Jan 13 20:20:03.603436 kernel: eth0: renamed from tmpdb246
Jan 13 20:20:03.611853 systemd-networkd[1242]: lxc5c369245693b: Gained carrier
Jan 13 20:20:05.004677 systemd-networkd[1242]: lxcf457b6e3522e: Gained IPv6LL
Jan 13 20:20:05.196106 systemd-networkd[1242]: lxc_health: Gained IPv6LL
Jan 13 20:20:05.259941 systemd-networkd[1242]: lxc5c369245693b: Gained IPv6LL
Jan 13 20:20:07.895512 containerd[1640]: time="2025-01-13T20:20:07.895225668Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:20:07.897390 containerd[1640]: time="2025-01-13T20:20:07.896267151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:20:07.897390 containerd[1640]: time="2025-01-13T20:20:07.896357391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:20:07.897390 containerd[1640]: time="2025-01-13T20:20:07.896587712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:20:07.918617 containerd[1640]: time="2025-01-13T20:20:07.918115969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:20:07.918916 containerd[1640]: time="2025-01-13T20:20:07.918764211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:20:07.920310 containerd[1640]: time="2025-01-13T20:20:07.919145852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:20:07.922389 containerd[1640]: time="2025-01-13T20:20:07.920677296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:20:08.034806 containerd[1640]: time="2025-01-13T20:20:08.034024995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5dxmf,Uid:1fb868fc-b73c-422d-b6e8-e58f52193d86,Namespace:kube-system,Attempt:0,} returns sandbox id \"db246cd7550fb96add618b0db4d264a77317331c34950e49cf23322547150e6e\""
Jan 13 20:20:08.044991 containerd[1640]: time="2025-01-13T20:20:08.044542103Z" level=info msg="CreateContainer within sandbox \"db246cd7550fb96add618b0db4d264a77317331c34950e49cf23322547150e6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:20:08.068566 containerd[1640]: time="2025-01-13T20:20:08.068522446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vv5p5,Uid:02597b82-4ed9-4084-a74a-1ca4e81f96c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c9026166a2def756699a727e07245fa79db70c5ae75edc6d85d4974172aeb6d\""
Jan 13 20:20:08.074611 containerd[1640]: time="2025-01-13T20:20:08.074473741Z" level=info msg="CreateContainer within sandbox \"7c9026166a2def756699a727e07245fa79db70c5ae75edc6d85d4974172aeb6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:20:08.085189 containerd[1640]: time="2025-01-13T20:20:08.085086929Z" level=info msg="CreateContainer within sandbox \"db246cd7550fb96add618b0db4d264a77317331c34950e49cf23322547150e6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d17ebdb1272682de47ba8ac71006203906c4b161c7db270d9ecccde3b24828b3\""
Jan 13 20:20:08.086264 containerd[1640]: time="2025-01-13T20:20:08.086233012Z" level=info msg="StartContainer for \"d17ebdb1272682de47ba8ac71006203906c4b161c7db270d9ecccde3b24828b3\""
Jan 13 20:20:08.093736 containerd[1640]: time="2025-01-13T20:20:08.093592031Z" level=info msg="CreateContainer within sandbox \"7c9026166a2def756699a727e07245fa79db70c5ae75edc6d85d4974172aeb6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa592ca327da7702c10f86a5acc88f0fc2f098a59d14331b33ef5aafb5c5abaa\""
Jan 13 20:20:08.095866 containerd[1640]: time="2025-01-13T20:20:08.095355236Z" level=info msg="StartContainer for \"fa592ca327da7702c10f86a5acc88f0fc2f098a59d14331b33ef5aafb5c5abaa\""
Jan 13 20:20:08.170172 containerd[1640]: time="2025-01-13T20:20:08.168549188Z" level=info msg="StartContainer for \"d17ebdb1272682de47ba8ac71006203906c4b161c7db270d9ecccde3b24828b3\" returns successfully"
Jan 13 20:20:08.195650 containerd[1640]: time="2025-01-13T20:20:08.195274258Z" level=info msg="StartContainer for \"fa592ca327da7702c10f86a5acc88f0fc2f098a59d14331b33ef5aafb5c5abaa\" returns successfully"
Jan 13 20:20:09.010054 kubelet[3079]: I0113 20:20:09.008537 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vv5p5" podStartSLOduration=32.008484631 podStartE2EDuration="32.008484631s" podCreationTimestamp="2025-01-13 20:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:20:08.988489539 +0000 UTC m=+44.376883697" watchObservedRunningTime="2025-01-13 20:20:09.008484631 +0000 UTC m=+44.396878749"
Jan 13 20:24:14.827770 systemd[1]: Started sshd@7-138.199.153.199:22-147.75.109.163:57748.service - OpenSSH per-connection server daemon (147.75.109.163:57748).
Jan 13 20:24:15.825217 sshd[4489]: Accepted publickey for core from 147.75.109.163 port 57748 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:15.827079 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:15.834034 systemd-logind[1612]: New session 8 of user core.
Jan 13 20:24:15.847001 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:24:16.610562 sshd[4492]: Connection closed by 147.75.109.163 port 57748
Jan 13 20:24:16.611034 sshd-session[4489]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:16.616061 systemd-logind[1612]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:24:16.616368 systemd[1]: sshd@7-138.199.153.199:22-147.75.109.163:57748.service: Deactivated successfully.
Jan 13 20:24:16.621017 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:24:16.622555 systemd-logind[1612]: Removed session 8.
Jan 13 20:24:21.777666 systemd[1]: Started sshd@8-138.199.153.199:22-147.75.109.163:36410.service - OpenSSH per-connection server daemon (147.75.109.163:36410).
Jan 13 20:24:22.760811 sshd[4504]: Accepted publickey for core from 147.75.109.163 port 36410 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:22.762723 sshd-session[4504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:22.770657 systemd-logind[1612]: New session 9 of user core.
Jan 13 20:24:22.776050 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:24:23.512782 sshd[4507]: Connection closed by 147.75.109.163 port 36410
Jan 13 20:24:23.513516 sshd-session[4504]: pam_unix(sshd:session): session closed for user core
Jan 13 20:24:23.517271 systemd[1]: sshd@8-138.199.153.199:22-147.75.109.163:36410.service: Deactivated successfully.
Jan 13 20:24:23.522292 systemd-logind[1612]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:24:23.523258 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:24:23.526768 systemd-logind[1612]: Removed session 9.
Jan 13 20:24:28.681601 systemd[1]: Started sshd@9-138.199.153.199:22-147.75.109.163:51126.service - OpenSSH per-connection server daemon (147.75.109.163:51126).
Jan 13 20:24:29.668319 sshd[4520]: Accepted publickey for core from 147.75.109.163 port 51126 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:24:29.670407 sshd-session[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:24:29.677999 systemd-logind[1612]: New session 10 of user core.
Jan 13 20:24:29.682671 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:24:30.420960 sshd[4523]: Connection closed by 147.75.109.163 port 51126 Jan 13 20:24:30.421895 sshd-session[4520]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:30.425457 systemd[1]: sshd@9-138.199.153.199:22-147.75.109.163:51126.service: Deactivated successfully. Jan 13 20:24:30.429405 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:24:30.431823 systemd-logind[1612]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:24:30.433094 systemd-logind[1612]: Removed session 10. Jan 13 20:24:30.594049 systemd[1]: Started sshd@10-138.199.153.199:22-147.75.109.163:51136.service - OpenSSH per-connection server daemon (147.75.109.163:51136). Jan 13 20:24:31.582790 sshd[4534]: Accepted publickey for core from 147.75.109.163 port 51136 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:31.584282 sshd-session[4534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:31.593984 systemd-logind[1612]: New session 11 of user core. Jan 13 20:24:31.601720 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 20:24:32.390283 sshd[4537]: Connection closed by 147.75.109.163 port 51136 Jan 13 20:24:32.390180 sshd-session[4534]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:32.396963 systemd-logind[1612]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:24:32.398180 systemd[1]: sshd@10-138.199.153.199:22-147.75.109.163:51136.service: Deactivated successfully. Jan 13 20:24:32.409206 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:24:32.410504 systemd-logind[1612]: Removed session 11. Jan 13 20:24:32.553585 systemd[1]: Started sshd@11-138.199.153.199:22-147.75.109.163:51144.service - OpenSSH per-connection server daemon (147.75.109.163:51144). Jan 13 20:24:33.553546 sshd[4546]: Accepted publickey for core from 147.75.109.163 port 51144 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:33.555861 sshd-session[4546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:33.563800 systemd-logind[1612]: New session 12 of user core. Jan 13 20:24:33.570360 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:24:34.313986 sshd[4549]: Connection closed by 147.75.109.163 port 51144 Jan 13 20:24:34.314682 sshd-session[4546]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:34.319650 systemd[1]: sshd@11-138.199.153.199:22-147.75.109.163:51144.service: Deactivated successfully. Jan 13 20:24:34.323869 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:24:34.324486 systemd-logind[1612]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:24:34.325656 systemd-logind[1612]: Removed session 12. Jan 13 20:24:39.483704 systemd[1]: Started sshd@12-138.199.153.199:22-147.75.109.163:52058.service - OpenSSH per-connection server daemon (147.75.109.163:52058). Jan 13 20:24:40.476691 sshd[4562]: Accepted publickey for core from 147.75.109.163 port 52058 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:40.478950 sshd-session[4562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:40.484346 systemd-logind[1612]: New session 13 of user core. Jan 13 20:24:40.488755 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 13 20:24:41.249347 sshd[4565]: Connection closed by 147.75.109.163 port 52058 Jan 13 20:24:41.248381 sshd-session[4562]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:41.253213 systemd[1]: sshd@12-138.199.153.199:22-147.75.109.163:52058.service: Deactivated successfully. Jan 13 20:24:41.257103 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:24:41.257246 systemd-logind[1612]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:24:41.259095 systemd-logind[1612]: Removed session 13. Jan 13 20:24:41.416143 systemd[1]: Started sshd@13-138.199.153.199:22-147.75.109.163:52066.service - OpenSSH per-connection server daemon (147.75.109.163:52066). Jan 13 20:24:42.411605 sshd[4575]: Accepted publickey for core from 147.75.109.163 port 52066 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:42.414596 sshd-session[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:42.420545 systemd-logind[1612]: New session 14 of user core. Jan 13 20:24:42.425744 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:24:43.214456 sshd[4578]: Connection closed by 147.75.109.163 port 52066 Jan 13 20:24:43.215357 sshd-session[4575]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:43.220666 systemd[1]: sshd@13-138.199.153.199:22-147.75.109.163:52066.service: Deactivated successfully. Jan 13 20:24:43.225287 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:24:43.227254 systemd-logind[1612]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:24:43.230910 systemd-logind[1612]: Removed session 14. Jan 13 20:24:43.379141 systemd[1]: Started sshd@14-138.199.153.199:22-147.75.109.163:52072.service - OpenSSH per-connection server daemon (147.75.109.163:52072). Jan 13 20:24:44.365246 sshd[4587]: Accepted publickey for core from 147.75.109.163 port 52072 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:44.367809 sshd-session[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:44.374377 systemd-logind[1612]: New session 15 of user core. Jan 13 20:24:44.381391 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:24:46.709635 sshd[4590]: Connection closed by 147.75.109.163 port 52072 Jan 13 20:24:46.709381 sshd-session[4587]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:46.713917 systemd[1]: sshd@14-138.199.153.199:22-147.75.109.163:52072.service: Deactivated successfully. Jan 13 20:24:46.717630 systemd-logind[1612]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:24:46.718029 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:24:46.719947 systemd-logind[1612]: Removed session 15. Jan 13 20:24:46.876645 systemd[1]: Started sshd@15-138.199.153.199:22-147.75.109.163:52086.service - OpenSSH per-connection server daemon (147.75.109.163:52086). Jan 13 20:24:47.872984 sshd[4606]: Accepted publickey for core from 147.75.109.163 port 52086 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:47.875356 sshd-session[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:47.881010 systemd-logind[1612]: New session 16 of user core. Jan 13 20:24:47.885708 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 13 20:24:48.744204 sshd[4609]: Connection closed by 147.75.109.163 port 52086 Jan 13 20:24:48.745038 sshd-session[4606]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:48.751433 systemd[1]: sshd@15-138.199.153.199:22-147.75.109.163:52086.service: Deactivated successfully. Jan 13 20:24:48.753589 systemd-logind[1612]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:24:48.756554 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:24:48.758715 systemd-logind[1612]: Removed session 16. Jan 13 20:24:48.907618 systemd[1]: Started sshd@16-138.199.153.199:22-147.75.109.163:51346.service - OpenSSH per-connection server daemon (147.75.109.163:51346). Jan 13 20:24:49.893892 sshd[4617]: Accepted publickey for core from 147.75.109.163 port 51346 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:49.896283 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:49.901064 systemd-logind[1612]: New session 17 of user core. Jan 13 20:24:49.908758 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:24:50.654063 sshd[4620]: Connection closed by 147.75.109.163 port 51346 Jan 13 20:24:50.656163 sshd-session[4617]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:50.662385 systemd[1]: sshd@16-138.199.153.199:22-147.75.109.163:51346.service: Deactivated successfully. Jan 13 20:24:50.667047 systemd-logind[1612]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:24:50.668497 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:24:50.670263 systemd-logind[1612]: Removed session 17. Jan 13 20:24:55.821641 systemd[1]: Started sshd@17-138.199.153.199:22-147.75.109.163:51348.service - OpenSSH per-connection server daemon (147.75.109.163:51348). Jan 13 20:24:56.804348 sshd[4633]: Accepted publickey for core from 147.75.109.163 port 51348 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:24:56.806268 sshd-session[4633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:24:56.811022 systemd-logind[1612]: New session 18 of user core. Jan 13 20:24:56.818872 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:24:57.554447 sshd[4636]: Connection closed by 147.75.109.163 port 51348 Jan 13 20:24:57.555278 sshd-session[4633]: pam_unix(sshd:session): session closed for user core Jan 13 20:24:57.560440 systemd[1]: sshd@17-138.199.153.199:22-147.75.109.163:51348.service: Deactivated successfully. Jan 13 20:24:57.564870 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:24:57.565841 systemd-logind[1612]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:24:57.566968 systemd-logind[1612]: Removed session 18. Jan 13 20:25:02.723745 systemd[1]: Started sshd@18-138.199.153.199:22-147.75.109.163:32968.service - OpenSSH per-connection server daemon (147.75.109.163:32968). Jan 13 20:25:03.706476 sshd[4647]: Accepted publickey for core from 147.75.109.163 port 32968 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:25:03.708475 sshd-session[4647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:03.713937 systemd-logind[1612]: New session 19 of user core. Jan 13 20:25:03.721737 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 20:25:04.464235 sshd[4650]: Connection closed by 147.75.109.163 port 32968 Jan 13 20:25:04.465510 sshd-session[4647]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:04.469349 systemd[1]: sshd@18-138.199.153.199:22-147.75.109.163:32968.service: Deactivated successfully. Jan 13 20:25:04.473853 systemd-logind[1612]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:25:04.475057 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:25:04.478060 systemd-logind[1612]: Removed session 19. Jan 13 20:25:04.632768 systemd[1]: Started sshd@19-138.199.153.199:22-147.75.109.163:32982.service - OpenSSH per-connection server daemon (147.75.109.163:32982). Jan 13 20:25:05.616338 sshd[4660]: Accepted publickey for core from 147.75.109.163 port 32982 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:25:05.619356 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:05.626753 systemd-logind[1612]: New session 20 of user core. Jan 13 20:25:05.633889 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 13 20:25:08.137761 kubelet[3079]: I0113 20:25:08.137709 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5dxmf" podStartSLOduration=331.13765333 podStartE2EDuration="5m31.13765333s" podCreationTimestamp="2025-01-13 20:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:20:09.029142525 +0000 UTC m=+44.417536603" watchObservedRunningTime="2025-01-13 20:25:08.13765333 +0000 UTC m=+343.526047408" Jan 13 20:25:08.150703 containerd[1640]: time="2025-01-13T20:25:08.150635962Z" level=info msg="StopContainer for \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\" with timeout 30 (s)" Jan 13 20:25:08.154879 containerd[1640]: time="2025-01-13T20:25:08.154735491Z" level=info msg="Stop container \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\" with signal terminated" Jan 13 20:25:08.173802 systemd[1]: run-containerd-runc-k8s.io-e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b-runc.tNnKvV.mount: Deactivated successfully. Jan 13 20:25:08.194400 containerd[1640]: time="2025-01-13T20:25:08.193892591Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:25:08.203856 containerd[1640]: time="2025-01-13T20:25:08.203678626Z" level=info msg="StopContainer for \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\" with timeout 2 (s)" Jan 13 20:25:08.205496 containerd[1640]: time="2025-01-13T20:25:08.205425686Z" level=info msg="Stop container \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\" with signal terminated" Jan 13 20:25:08.214659 systemd-networkd[1242]: lxc_health: Link DOWN Jan 13 20:25:08.214669 systemd-networkd[1242]: lxc_health: Lost carrier Jan 13 20:25:08.232119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245-rootfs.mount: Deactivated successfully. 
Jan 13 20:25:08.247560 containerd[1640]: time="2025-01-13T20:25:08.247368699Z" level=info msg="shim disconnected" id=335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245 namespace=k8s.io Jan 13 20:25:08.247560 containerd[1640]: time="2025-01-13T20:25:08.247426420Z" level=warning msg="cleaning up after shim disconnected" id=335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245 namespace=k8s.io Jan 13 20:25:08.247560 containerd[1640]: time="2025-01-13T20:25:08.247434300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:08.273816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b-rootfs.mount: Deactivated successfully. Jan 13 20:25:08.281421 containerd[1640]: time="2025-01-13T20:25:08.280658091Z" level=info msg="shim disconnected" id=e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b namespace=k8s.io Jan 13 20:25:08.281784 containerd[1640]: time="2025-01-13T20:25:08.281431100Z" level=warning msg="cleaning up after shim disconnected" id=e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b namespace=k8s.io Jan 13 20:25:08.281784 containerd[1640]: time="2025-01-13T20:25:08.281459100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:08.281784 containerd[1640]: time="2025-01-13T20:25:08.280881293Z" level=info msg="StopContainer for \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\" returns successfully" Jan 13 20:25:08.282680 containerd[1640]: time="2025-01-13T20:25:08.282640954Z" level=info msg="StopPodSandbox for \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\"" Jan 13 20:25:08.282928 containerd[1640]: time="2025-01-13T20:25:08.282816036Z" level=info msg="Container to stop \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:08.290985 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e-shm.mount: Deactivated successfully. 
Jan 13 20:25:08.313753 containerd[1640]: time="2025-01-13T20:25:08.313568757Z" level=info msg="StopContainer for \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\" returns successfully" Jan 13 20:25:08.314582 containerd[1640]: time="2025-01-13T20:25:08.314406647Z" level=info msg="StopPodSandbox for \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\"" Jan 13 20:25:08.314582 containerd[1640]: time="2025-01-13T20:25:08.314444248Z" level=info msg="Container to stop \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:08.314582 containerd[1640]: time="2025-01-13T20:25:08.314456088Z" level=info msg="Container to stop \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:08.314582 containerd[1640]: time="2025-01-13T20:25:08.314465008Z" level=info msg="Container to stop \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:08.315121 containerd[1640]: time="2025-01-13T20:25:08.314473248Z" level=info msg="Container to stop \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:08.315121 containerd[1640]: time="2025-01-13T20:25:08.315075575Z" level=info msg="Container to stop \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:25:08.337971 containerd[1640]: time="2025-01-13T20:25:08.337840443Z" level=info msg="shim disconnected" id=78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e namespace=k8s.io Jan 13 20:25:08.337971 containerd[1640]: time="2025-01-13T20:25:08.337903763Z" level=warning msg="cleaning up after shim disconnected" id=78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e namespace=k8s.io Jan 13 20:25:08.337971 containerd[1640]: time="2025-01-13T20:25:08.337912483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:08.358420 containerd[1640]: time="2025-01-13T20:25:08.358198802Z" level=info msg="shim disconnected" id=b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e namespace=k8s.io Jan 13 20:25:08.358420 containerd[1640]: time="2025-01-13T20:25:08.358258923Z" level=warning msg="cleaning up after shim disconnected" id=b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e namespace=k8s.io Jan 13 20:25:08.358420 containerd[1640]: time="2025-01-13T20:25:08.358266563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:25:08.362011 containerd[1640]: time="2025-01-13T20:25:08.361974806Z" level=info msg="TearDown network for sandbox \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\" successfully" Jan 13 20:25:08.362154 containerd[1640]: time="2025-01-13T20:25:08.362140408Z" level=info msg="StopPodSandbox for \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\" returns successfully" Jan 13 20:25:08.375448 containerd[1640]: time="2025-01-13T20:25:08.375215962Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:25:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:25:08.378254 containerd[1640]: 
time="2025-01-13T20:25:08.378214037Z" level=info msg="TearDown network for sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" successfully" Jan 13 20:25:08.378519 containerd[1640]: time="2025-01-13T20:25:08.378427280Z" level=info msg="StopPodSandbox for \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" returns successfully" Jan 13 20:25:08.474827 kubelet[3079]: I0113 20:25:08.474577 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71db5af9-23d1-4da9-a2d3-888d7e0ee85e-cilium-config-path\") pod \"71db5af9-23d1-4da9-a2d3-888d7e0ee85e\" (UID: \"71db5af9-23d1-4da9-a2d3-888d7e0ee85e\") " Jan 13 20:25:08.474827 kubelet[3079]: I0113 20:25:08.474629 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-cgroup\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.474827 kubelet[3079]: I0113 20:25:08.474652 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-config-path\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.474827 kubelet[3079]: I0113 20:25:08.474672 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d52cfc35-c25c-44c0-9016-71d43cacf0f3-hubble-tls\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.474827 kubelet[3079]: I0113 20:25:08.474705 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-lib-modules\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.474827 kubelet[3079]: I0113 20:25:08.474727 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-run\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.475397 kubelet[3079]: I0113 20:25:08.474745 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-hostproc\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.475397 kubelet[3079]: I0113 20:25:08.474763 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-host-proc-sys-kernel\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.476583 kubelet[3079]: I0113 20:25:08.475662 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-bpf-maps\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.476583 kubelet[3079]: I0113 
20:25:08.475751 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-host-proc-sys-net\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.476583 kubelet[3079]: I0113 20:25:08.475815 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqm6t\" (UniqueName: \"kubernetes.io/projected/71db5af9-23d1-4da9-a2d3-888d7e0ee85e-kube-api-access-bqm6t\") pod \"71db5af9-23d1-4da9-a2d3-888d7e0ee85e\" (UID: \"71db5af9-23d1-4da9-a2d3-888d7e0ee85e\") " Jan 13 20:25:08.476583 kubelet[3079]: I0113 20:25:08.475880 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d52cfc35-c25c-44c0-9016-71d43cacf0f3-clustermesh-secrets\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.476583 kubelet[3079]: I0113 20:25:08.475927 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cni-path\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.476583 kubelet[3079]: I0113 20:25:08.475969 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-etc-cni-netd\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.477259 kubelet[3079]: I0113 20:25:08.476032 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-xtables-lock\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.477259 kubelet[3079]: I0113 20:25:08.476080 3079 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6n4l\" (UniqueName: \"kubernetes.io/projected/d52cfc35-c25c-44c0-9016-71d43cacf0f3-kube-api-access-h6n4l\") pod \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\" (UID: \"d52cfc35-c25c-44c0-9016-71d43cacf0f3\") " Jan 13 20:25:08.482400 kubelet[3079]: I0113 20:25:08.482070 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482526 kubelet[3079]: I0113 20:25:08.482448 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52cfc35-c25c-44c0-9016-71d43cacf0f3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:08.482526 kubelet[3079]: I0113 20:25:08.482495 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482526 kubelet[3079]: I0113 20:25:08.482514 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482526 kubelet[3079]: I0113 20:25:08.482532 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482634 kubelet[3079]: I0113 20:25:08.482549 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-hostproc" (OuterVolumeSpecName: "hostproc") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482634 kubelet[3079]: I0113 20:25:08.482564 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482634 kubelet[3079]: I0113 20:25:08.482582 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482634 kubelet[3079]: I0113 20:25:08.482598 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cni-path" (OuterVolumeSpecName: "cni-path") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482765 kubelet[3079]: I0113 20:25:08.482728 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.482765 kubelet[3079]: I0113 20:25:08.482752 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:25:08.485336 kubelet[3079]: I0113 20:25:08.485022 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d52cfc35-c25c-44c0-9016-71d43cacf0f3-kube-api-access-h6n4l" (OuterVolumeSpecName: "kube-api-access-h6n4l") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "kube-api-access-h6n4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:08.486016 kubelet[3079]: I0113 20:25:08.485912 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71db5af9-23d1-4da9-a2d3-888d7e0ee85e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "71db5af9-23d1-4da9-a2d3-888d7e0ee85e" (UID: "71db5af9-23d1-4da9-a2d3-888d7e0ee85e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:25:08.487804 kubelet[3079]: I0113 20:25:08.487738 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:25:08.488522 kubelet[3079]: I0113 20:25:08.488489 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71db5af9-23d1-4da9-a2d3-888d7e0ee85e-kube-api-access-bqm6t" (OuterVolumeSpecName: "kube-api-access-bqm6t") pod "71db5af9-23d1-4da9-a2d3-888d7e0ee85e" (UID: "71db5af9-23d1-4da9-a2d3-888d7e0ee85e"). InnerVolumeSpecName "kube-api-access-bqm6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:25:08.488659 kubelet[3079]: I0113 20:25:08.488635 3079 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d52cfc35-c25c-44c0-9016-71d43cacf0f3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d52cfc35-c25c-44c0-9016-71d43cacf0f3" (UID: "d52cfc35-c25c-44c0-9016-71d43cacf0f3"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:25:08.577217 kubelet[3079]: I0113 20:25:08.577171 3079 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-hostproc\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577217 kubelet[3079]: I0113 20:25:08.577221 3079 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-host-proc-sys-kernel\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577217 kubelet[3079]: I0113 20:25:08.577238 3079 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-bpf-maps\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577534 kubelet[3079]: I0113 20:25:08.577255 3079 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-host-proc-sys-net\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577534 kubelet[3079]: I0113 20:25:08.577272 3079 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqm6t\" (UniqueName: \"kubernetes.io/projected/71db5af9-23d1-4da9-a2d3-888d7e0ee85e-kube-api-access-bqm6t\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577534 kubelet[3079]: I0113 20:25:08.577287 3079 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d52cfc35-c25c-44c0-9016-71d43cacf0f3-clustermesh-secrets\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577534 kubelet[3079]: I0113 20:25:08.577325 3079 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cni-path\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577534 kubelet[3079]: I0113 20:25:08.577344 3079 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-etc-cni-netd\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577534 kubelet[3079]: I0113 20:25:08.577360 3079 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-xtables-lock\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577534 kubelet[3079]: I0113 20:25:08.577375 3079 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h6n4l\" (UniqueName: \"kubernetes.io/projected/d52cfc35-c25c-44c0-9016-71d43cacf0f3-kube-api-access-h6n4l\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577534 kubelet[3079]: I0113 20:25:08.577389 3079 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-cgroup\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577836 kubelet[3079]: I0113 20:25:08.577404 3079 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-config-path\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577836 kubelet[3079]: I0113 
20:25:08.577417 3079 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d52cfc35-c25c-44c0-9016-71d43cacf0f3-hubble-tls\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577836 kubelet[3079]: I0113 20:25:08.577431 3079 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-lib-modules\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577836 kubelet[3079]: I0113 20:25:08.577444 3079 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d52cfc35-c25c-44c0-9016-71d43cacf0f3-cilium-run\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.577836 kubelet[3079]: I0113 20:25:08.577458 3079 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71db5af9-23d1-4da9-a2d3-888d7e0ee85e-cilium-config-path\") on node \"ci-4152-2-0-9-7c8f4a1e31\" DevicePath \"\"" Jan 13 20:25:08.739919 kubelet[3079]: I0113 20:25:08.738711 3079 scope.go:117] "RemoveContainer" containerID="335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245" Jan 13 20:25:08.744653 containerd[1640]: time="2025-01-13T20:25:08.744547903Z" level=info msg="RemoveContainer for \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\"" Jan 13 20:25:08.755020 containerd[1640]: time="2025-01-13T20:25:08.754967225Z" level=info msg="RemoveContainer for \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\" returns successfully" Jan 13 20:25:08.756224 kubelet[3079]: I0113 20:25:08.756035 3079 scope.go:117] "RemoveContainer" containerID="335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245" Jan 13 20:25:08.757085 containerd[1640]: time="2025-01-13T20:25:08.756958889Z" level=error msg="ContainerStatus for \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\": not found" Jan 13 20:25:08.757282 kubelet[3079]: E0113 20:25:08.757225 3079 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\": not found" containerID="335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245" Jan 13 20:25:08.758881 kubelet[3079]: I0113 20:25:08.758224 3079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245"} err="failed to get container status \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\": rpc error: code = NotFound desc = an error occurred when try to find container \"335e47acc5059c9cd467e32399580e6d1e576eec8b46d88a91859d70e0002245\": not found" Jan 13 20:25:08.758881 kubelet[3079]: I0113 20:25:08.758269 3079 scope.go:117] "RemoveContainer" containerID="e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b" Jan 13 20:25:08.763842 containerd[1640]: time="2025-01-13T20:25:08.762555154Z" level=info msg="RemoveContainer for \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\"" Jan 13 20:25:08.774341 containerd[1640]: time="2025-01-13T20:25:08.771909904Z" level=info msg="RemoveContainer for 
\"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\" returns successfully" Jan 13 20:25:08.774483 kubelet[3079]: I0113 20:25:08.773784 3079 scope.go:117] "RemoveContainer" containerID="7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026" Jan 13 20:25:08.778136 containerd[1640]: time="2025-01-13T20:25:08.778092217Z" level=info msg="RemoveContainer for \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\"" Jan 13 20:25:08.783640 containerd[1640]: time="2025-01-13T20:25:08.783593722Z" level=info msg="RemoveContainer for \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\" returns successfully" Jan 13 20:25:08.784175 kubelet[3079]: I0113 20:25:08.784111 3079 scope.go:117] "RemoveContainer" containerID="6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61" Jan 13 20:25:08.786052 containerd[1640]: time="2025-01-13T20:25:08.786011190Z" level=info msg="RemoveContainer for \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\"" Jan 13 20:25:08.790850 containerd[1640]: time="2025-01-13T20:25:08.790683325Z" level=info msg="RemoveContainer for \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\" returns successfully" Jan 13 20:25:08.791755 kubelet[3079]: I0113 20:25:08.791521 3079 scope.go:117] "RemoveContainer" containerID="efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530" Jan 13 20:25:08.793908 containerd[1640]: time="2025-01-13T20:25:08.793868242Z" level=info msg="RemoveContainer for \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\"" Jan 13 20:25:08.800189 containerd[1640]: time="2025-01-13T20:25:08.800112836Z" level=info msg="RemoveContainer for \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\" returns successfully" Jan 13 20:25:08.800490 kubelet[3079]: I0113 20:25:08.800374 3079 scope.go:117] "RemoveContainer" containerID="115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590" Jan 13 20:25:08.802229 containerd[1640]: time="2025-01-13T20:25:08.802094059Z" level=info msg="RemoveContainer for \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\"" Jan 13 20:25:08.807336 containerd[1640]: time="2025-01-13T20:25:08.806755194Z" level=info msg="RemoveContainer for \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\" returns successfully" Jan 13 20:25:08.808253 kubelet[3079]: I0113 20:25:08.808222 3079 scope.go:117] "RemoveContainer" containerID="e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b" Jan 13 20:25:08.809745 containerd[1640]: time="2025-01-13T20:25:08.809647908Z" level=error msg="ContainerStatus for \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\": not found" Jan 13 20:25:08.809925 kubelet[3079]: E0113 20:25:08.809901 3079 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\": not found" containerID="e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b" Jan 13 20:25:08.809970 kubelet[3079]: I0113 20:25:08.809948 3079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b"} err="failed to get container status 
\"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\": rpc error: code = NotFound desc = an error occurred when try to find container \"e3686f5dbd40e578b933b75a933d594c459017e9027d94ab26999a42ffcdc32b\": not found" Jan 13 20:25:08.809970 kubelet[3079]: I0113 20:25:08.809965 3079 scope.go:117] "RemoveContainer" containerID="7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026" Jan 13 20:25:08.810350 containerd[1640]: time="2025-01-13T20:25:08.810232155Z" level=error msg="ContainerStatus for \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\": not found" Jan 13 20:25:08.810505 kubelet[3079]: E0113 20:25:08.810408 3079 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\": not found" containerID="7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026" Jan 13 20:25:08.810505 kubelet[3079]: I0113 20:25:08.810503 3079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026"} err="failed to get container status \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a82564e1c8d162a9148cb0efc57c0824a23715237059e450257dfc3210f1026\": not found" Jan 13 20:25:08.810587 kubelet[3079]: I0113 20:25:08.810518 3079 scope.go:117] "RemoveContainer" containerID="6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61" Jan 13 20:25:08.811314 containerd[1640]: time="2025-01-13T20:25:08.810712880Z" level=error msg="ContainerStatus for \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\": not found" Jan 13 20:25:08.811392 kubelet[3079]: E0113 20:25:08.811175 3079 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\": not found" containerID="6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61" Jan 13 20:25:08.811392 kubelet[3079]: I0113 20:25:08.811209 3079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61"} err="failed to get container status \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\": rpc error: code = NotFound desc = an error occurred when try to find container \"6f887afdc557281a2c620f2c2db6260f80dc19f4ffc8843c1e35b34ec8fccd61\": not found" Jan 13 20:25:08.811392 kubelet[3079]: I0113 20:25:08.811219 3079 scope.go:117] "RemoveContainer" containerID="efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530" Jan 13 20:25:08.812591 containerd[1640]: time="2025-01-13T20:25:08.812549822Z" level=error msg="ContainerStatus for \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\": not found" Jan 13 20:25:08.812885 kubelet[3079]: E0113 20:25:08.812839 3079 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\": not found" containerID="efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530" Jan 13 20:25:08.812885 kubelet[3079]: I0113 20:25:08.812876 3079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530"} err="failed to get container status \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\": rpc error: code = NotFound desc = an error occurred when try to find container \"efc929c74c9368b012c514f2e2c65f0b42d975089b66c98ab72b7d4af8b19530\": not found" Jan 13 20:25:08.812885 kubelet[3079]: I0113 20:25:08.812889 3079 scope.go:117] "RemoveContainer" containerID="115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590" Jan 13 20:25:08.813224 containerd[1640]: time="2025-01-13T20:25:08.813158709Z" level=error msg="ContainerStatus for \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\": not found" Jan 13 20:25:08.813360 kubelet[3079]: E0113 20:25:08.813278 3079 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\": not found" containerID="115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590" Jan 13 20:25:08.813360 kubelet[3079]: I0113 20:25:08.813319 3079 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590"} err="failed to get container status \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\": rpc error: code = NotFound desc = an error occurred when try to find container \"115aa18c3b158e1636b0889198c51b7d9e3ff90c6dbfa7bbd89781004aff4590\": not found" Jan 13 20:25:09.163796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e-rootfs.mount: Deactivated successfully. Jan 13 20:25:09.163968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e-rootfs.mount: Deactivated successfully. Jan 13 20:25:09.164074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e-shm.mount: Deactivated successfully. Jan 13 20:25:09.164184 systemd[1]: var-lib-kubelet-pods-71db5af9\x2d23d1\x2d4da9\x2da2d3\x2d888d7e0ee85e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbqm6t.mount: Deactivated successfully. Jan 13 20:25:09.164278 systemd[1]: var-lib-kubelet-pods-d52cfc35\x2dc25c\x2d44c0\x2d9016\x2d71d43cacf0f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh6n4l.mount: Deactivated successfully. Jan 13 20:25:09.164391 systemd[1]: var-lib-kubelet-pods-d52cfc35\x2dc25c\x2d44c0\x2d9016\x2d71d43cacf0f3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 13 20:25:09.164487 systemd[1]: var-lib-kubelet-pods-d52cfc35\x2dc25c\x2d44c0\x2d9016\x2d71d43cacf0f3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:25:09.982080 kubelet[3079]: E0113 20:25:09.982026 3079 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:25:10.243526 sshd[4663]: Connection closed by 147.75.109.163 port 32982 Jan 13 20:25:10.244817 sshd-session[4660]: pam_unix(sshd:session): session closed for user core Jan 13 20:25:10.249071 systemd-logind[1612]: Session 20 logged out. Waiting for processes to exit. Jan 13 20:25:10.249134 systemd[1]: sshd@19-138.199.153.199:22-147.75.109.163:32982.service: Deactivated successfully. Jan 13 20:25:10.256046 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 20:25:10.257858 systemd-logind[1612]: Removed session 20. Jan 13 20:25:10.409601 systemd[1]: Started sshd@20-138.199.153.199:22-147.75.109.163:43266.service - OpenSSH per-connection server daemon (147.75.109.163:43266). Jan 13 20:25:10.751634 kubelet[3079]: I0113 20:25:10.751530 3079 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="71db5af9-23d1-4da9-a2d3-888d7e0ee85e" path="/var/lib/kubelet/pods/71db5af9-23d1-4da9-a2d3-888d7e0ee85e/volumes" Jan 13 20:25:10.752477 kubelet[3079]: I0113 20:25:10.752268 3079 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d52cfc35-c25c-44c0-9016-71d43cacf0f3" path="/var/lib/kubelet/pods/d52cfc35-c25c-44c0-9016-71d43cacf0f3/volumes" Jan 13 20:25:11.399434 sshd[4834]: Accepted publickey for core from 147.75.109.163 port 43266 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo Jan 13 20:25:11.402055 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:25:11.410179 systemd-logind[1612]: New session 21 of user core. Jan 13 20:25:11.417804 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 13 20:25:12.128347 kubelet[3079]: I0113 20:25:12.127673 3079 setters.go:568] "Node became not ready" node="ci-4152-2-0-9-7c8f4a1e31" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:25:12Z","lastTransitionTime":"2025-01-13T20:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:25:12.590399 kubelet[3079]: I0113 20:25:12.589404 3079 topology_manager.go:215] "Topology Admit Handler" podUID="16363c9f-055e-4b58-ae16-cceb845938fa" podNamespace="kube-system" podName="cilium-ccdpf"
Jan 13 20:25:12.590676 kubelet[3079]: E0113 20:25:12.590575 3079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d52cfc35-c25c-44c0-9016-71d43cacf0f3" containerName="mount-bpf-fs"
Jan 13 20:25:12.590676 kubelet[3079]: E0113 20:25:12.590610 3079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="71db5af9-23d1-4da9-a2d3-888d7e0ee85e" containerName="cilium-operator"
Jan 13 20:25:12.590676 kubelet[3079]: E0113 20:25:12.590620 3079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d52cfc35-c25c-44c0-9016-71d43cacf0f3" containerName="clean-cilium-state"
Jan 13 20:25:12.590676 kubelet[3079]: E0113 20:25:12.590628 3079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d52cfc35-c25c-44c0-9016-71d43cacf0f3" containerName="apply-sysctl-overwrites"
Jan 13 20:25:12.590676 kubelet[3079]: E0113 20:25:12.590634 3079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d52cfc35-c25c-44c0-9016-71d43cacf0f3" containerName="mount-cgroup"
Jan 13 20:25:12.590676 kubelet[3079]: E0113 20:25:12.590641 3079 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d52cfc35-c25c-44c0-9016-71d43cacf0f3" containerName="cilium-agent"
Jan 13 20:25:12.590676 kubelet[3079]: I0113 20:25:12.590664 3079 memory_manager.go:354] "RemoveStaleState removing state" podUID="d52cfc35-c25c-44c0-9016-71d43cacf0f3" containerName="cilium-agent"
Jan 13 20:25:12.592048 kubelet[3079]: I0113 20:25:12.591611 3079 memory_manager.go:354] "RemoveStaleState removing state" podUID="71db5af9-23d1-4da9-a2d3-888d7e0ee85e" containerName="cilium-operator"
Jan 13 20:25:12.704090 kubelet[3079]: I0113 20:25:12.704000 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-cilium-run\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.704509 kubelet[3079]: I0113 20:25:12.704331 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-cilium-cgroup\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.704509 kubelet[3079]: I0113 20:25:12.704369 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-cni-path\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.704509 kubelet[3079]: I0113 20:25:12.704401 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/16363c9f-055e-4b58-ae16-cceb845938fa-cilium-ipsec-secrets\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.704509 kubelet[3079]: I0113 20:25:12.704437 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-bpf-maps\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.704509 kubelet[3079]: I0113 20:25:12.704463 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-lib-modules\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.705227 kubelet[3079]: I0113 20:25:12.704489 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-host-proc-sys-net\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.705373 kubelet[3079]: I0113 20:25:12.705357 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-host-proc-sys-kernel\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.705499 kubelet[3079]: I0113 20:25:12.705484 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16363c9f-055e-4b58-ae16-cceb845938fa-hubble-tls\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.708323 kubelet[3079]: I0113 20:25:12.706254 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16363c9f-055e-4b58-ae16-cceb845938fa-clustermesh-secrets\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.708323 kubelet[3079]: I0113 20:25:12.706326 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4r6x\" (UniqueName: \"kubernetes.io/projected/16363c9f-055e-4b58-ae16-cceb845938fa-kube-api-access-m4r6x\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.708323 kubelet[3079]: I0113 20:25:12.706358 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-hostproc\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.708323 kubelet[3079]: I0113 20:25:12.706392 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16363c9f-055e-4b58-ae16-cceb845938fa-cilium-config-path\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.708323 kubelet[3079]: I0113 20:25:12.706428 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-etc-cni-netd\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.708323 kubelet[3079]: I0113 20:25:12.706459 3079 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16363c9f-055e-4b58-ae16-cceb845938fa-xtables-lock\") pod \"cilium-ccdpf\" (UID: \"16363c9f-055e-4b58-ae16-cceb845938fa\") " pod="kube-system/cilium-ccdpf"
Jan 13 20:25:12.771752 sshd[4837]: Connection closed by 147.75.109.163 port 43266
Jan 13 20:25:12.772656 sshd-session[4834]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:12.779210 systemd[1]: sshd@20-138.199.153.199:22-147.75.109.163:43266.service: Deactivated successfully.
Jan 13 20:25:12.779638 systemd-logind[1612]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:25:12.784772 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:25:12.786517 systemd-logind[1612]: Removed session 21.
Jan 13 20:25:12.903947 containerd[1640]: time="2025-01-13T20:25:12.903472675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ccdpf,Uid:16363c9f-055e-4b58-ae16-cceb845938fa,Namespace:kube-system,Attempt:0,}"
Jan 13 20:25:12.928855 containerd[1640]: time="2025-01-13T20:25:12.928620724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:25:12.929143 containerd[1640]: time="2025-01-13T20:25:12.929013449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:25:12.929143 containerd[1640]: time="2025-01-13T20:25:12.929082369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:12.929875 containerd[1640]: time="2025-01-13T20:25:12.929759737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:25:12.940043 systemd[1]: Started sshd@21-138.199.153.199:22-147.75.109.163:43278.service - OpenSSH per-connection server daemon (147.75.109.163:43278).
Jan 13 20:25:12.973705 containerd[1640]: time="2025-01-13T20:25:12.973640602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ccdpf,Uid:16363c9f-055e-4b58-ae16-cceb845938fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\""
Jan 13 20:25:12.978524 containerd[1640]: time="2025-01-13T20:25:12.978484378Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:25:12.990506 containerd[1640]: time="2025-01-13T20:25:12.990382115Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"55e42297a81d6dcd303778f47b663868ffc87029a479923ac8f4d158dfa661e4\""
Jan 13 20:25:12.992721 containerd[1640]: time="2025-01-13T20:25:12.992640781Z" level=info msg="StartContainer for \"55e42297a81d6dcd303778f47b663868ffc87029a479923ac8f4d158dfa661e4\""
Jan 13 20:25:13.060230 containerd[1640]: time="2025-01-13T20:25:13.060158474Z" level=info msg="StartContainer for \"55e42297a81d6dcd303778f47b663868ffc87029a479923ac8f4d158dfa661e4\" returns successfully"
Jan 13 20:25:13.110093 containerd[1640]: time="2025-01-13T20:25:13.109900284Z" level=info msg="shim disconnected" id=55e42297a81d6dcd303778f47b663868ffc87029a479923ac8f4d158dfa661e4 namespace=k8s.io
Jan 13 20:25:13.110093 containerd[1640]: time="2025-01-13T20:25:13.109985285Z" level=warning msg="cleaning up after shim disconnected" id=55e42297a81d6dcd303778f47b663868ffc87029a479923ac8f4d158dfa661e4 namespace=k8s.io
Jan 13 20:25:13.110093 containerd[1640]: time="2025-01-13T20:25:13.109997405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:13.773537 containerd[1640]: time="2025-01-13T20:25:13.773181878Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:25:13.786140 containerd[1640]: time="2025-01-13T20:25:13.785988945Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"73feea0bb0b900b4bf71520ab4a3992c28c138abc5029d7683bfced5f3a8e06a\""
Jan 13 20:25:13.792009 containerd[1640]: time="2025-01-13T20:25:13.789481305Z" level=info msg="StartContainer for \"73feea0bb0b900b4bf71520ab4a3992c28c138abc5029d7683bfced5f3a8e06a\""
Jan 13 20:25:13.855894 containerd[1640]: time="2025-01-13T20:25:13.855835424Z" level=info msg="StartContainer for \"73feea0bb0b900b4bf71520ab4a3992c28c138abc5029d7683bfced5f3a8e06a\" returns successfully"
Jan 13 20:25:13.888537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73feea0bb0b900b4bf71520ab4a3992c28c138abc5029d7683bfced5f3a8e06a-rootfs.mount: Deactivated successfully.
Jan 13 20:25:13.893241 containerd[1640]: time="2025-01-13T20:25:13.893134371Z" level=info msg="shim disconnected" id=73feea0bb0b900b4bf71520ab4a3992c28c138abc5029d7683bfced5f3a8e06a namespace=k8s.io
Jan 13 20:25:13.893241 containerd[1640]: time="2025-01-13T20:25:13.893216452Z" level=warning msg="cleaning up after shim disconnected" id=73feea0bb0b900b4bf71520ab4a3992c28c138abc5029d7683bfced5f3a8e06a namespace=k8s.io
Jan 13 20:25:13.893518 containerd[1640]: time="2025-01-13T20:25:13.893225572Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:13.934492 sshd[4874]: Accepted publickey for core from 147.75.109.163 port 43278 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:25:13.936394 sshd-session[4874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:25:13.942911 systemd-logind[1612]: New session 22 of user core.
Jan 13 20:25:13.948031 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 13 20:25:14.614360 sshd[5015]: Connection closed by 147.75.109.163 port 43278
Jan 13 20:25:14.615581 sshd-session[4874]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:14.619759 systemd-logind[1612]: Session 22 logged out. Waiting for processes to exit.
Jan 13 20:25:14.620630 systemd[1]: sshd@21-138.199.153.199:22-147.75.109.163:43278.service: Deactivated successfully.
Jan 13 20:25:14.626898 systemd[1]: session-22.scope: Deactivated successfully.
Jan 13 20:25:14.628274 systemd-logind[1612]: Removed session 22.
Jan 13 20:25:14.779837 systemd[1]: Started sshd@22-138.199.153.199:22-147.75.109.163:43294.service - OpenSSH per-connection server daemon (147.75.109.163:43294).
Jan 13 20:25:14.782916 containerd[1640]: time="2025-01-13T20:25:14.782848951Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:25:14.817342 containerd[1640]: time="2025-01-13T20:25:14.817243063Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"df5080c643ceb0550776c9de34f593048f407b8d9e8f038df2f5e149748b371f\""
Jan 13 20:25:14.818027 containerd[1640]: time="2025-01-13T20:25:14.817833750Z" level=info msg="StartContainer for \"df5080c643ceb0550776c9de34f593048f407b8d9e8f038df2f5e149748b371f\""
Jan 13 20:25:14.886024 containerd[1640]: time="2025-01-13T20:25:14.885734963Z" level=info msg="StartContainer for \"df5080c643ceb0550776c9de34f593048f407b8d9e8f038df2f5e149748b371f\" returns successfully"
Jan 13 20:25:14.919003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df5080c643ceb0550776c9de34f593048f407b8d9e8f038df2f5e149748b371f-rootfs.mount: Deactivated successfully.
Jan 13 20:25:14.924358 containerd[1640]: time="2025-01-13T20:25:14.924136800Z" level=info msg="shim disconnected" id=df5080c643ceb0550776c9de34f593048f407b8d9e8f038df2f5e149748b371f namespace=k8s.io
Jan 13 20:25:14.924358 containerd[1640]: time="2025-01-13T20:25:14.924194041Z" level=warning msg="cleaning up after shim disconnected" id=df5080c643ceb0550776c9de34f593048f407b8d9e8f038df2f5e149748b371f namespace=k8s.io
Jan 13 20:25:14.924358 containerd[1640]: time="2025-01-13T20:25:14.924202321Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:14.983628 kubelet[3079]: E0113 20:25:14.983542 3079 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:25:15.771153 sshd[5021]: Accepted publickey for core from 147.75.109.163 port 43294 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:25:15.773889 sshd-session[5021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:25:15.781330 systemd-logind[1612]: New session 23 of user core.
Jan 13 20:25:15.786832 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 13 20:25:15.799449 containerd[1640]: time="2025-01-13T20:25:15.799394762Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:25:15.824221 containerd[1640]: time="2025-01-13T20:25:15.824134363Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8a5bd82d815e12fc6c5925f28ba3cb02d8b090671684ffbf661681a9f9260aba\""
Jan 13 20:25:15.827157 containerd[1640]: time="2025-01-13T20:25:15.825410217Z" level=info msg="StartContainer for \"8a5bd82d815e12fc6c5925f28ba3cb02d8b090671684ffbf661681a9f9260aba\""
Jan 13 20:25:15.891933 containerd[1640]: time="2025-01-13T20:25:15.891865010Z" level=info msg="StartContainer for \"8a5bd82d815e12fc6c5925f28ba3cb02d8b090671684ffbf661681a9f9260aba\" returns successfully"
Jan 13 20:25:15.921376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a5bd82d815e12fc6c5925f28ba3cb02d8b090671684ffbf661681a9f9260aba-rootfs.mount: Deactivated successfully.
Jan 13 20:25:15.928811 containerd[1640]: time="2025-01-13T20:25:15.928556186Z" level=info msg="shim disconnected" id=8a5bd82d815e12fc6c5925f28ba3cb02d8b090671684ffbf661681a9f9260aba namespace=k8s.io
Jan 13 20:25:15.928811 containerd[1640]: time="2025-01-13T20:25:15.928781828Z" level=warning msg="cleaning up after shim disconnected" id=8a5bd82d815e12fc6c5925f28ba3cb02d8b090671684ffbf661681a9f9260aba namespace=k8s.io
Jan 13 20:25:15.929079 containerd[1640]: time="2025-01-13T20:25:15.928810509Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:25:16.800253 containerd[1640]: time="2025-01-13T20:25:16.800205815Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:25:16.820120 containerd[1640]: time="2025-01-13T20:25:16.820022759Z" level=info msg="CreateContainer within sandbox \"d3920c7b765e055a856f3ca75bf82045c8b109f4a17a4a69e16e31529430e4f0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7b2492d954cb26b98abfe53104a3b0743214232bef310c7965f3350812934c10\""
Jan 13 20:25:16.821864 containerd[1640]: time="2025-01-13T20:25:16.821800139Z" level=info msg="StartContainer for \"7b2492d954cb26b98abfe53104a3b0743214232bef310c7965f3350812934c10\""
Jan 13 20:25:16.885027 containerd[1640]: time="2025-01-13T20:25:16.884941491Z" level=info msg="StartContainer for \"7b2492d954cb26b98abfe53104a3b0743214232bef310c7965f3350812934c10\" returns successfully"
Jan 13 20:25:17.209336 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:25:20.186453 systemd-networkd[1242]: lxc_health: Link UP
Jan 13 20:25:20.197049 systemd-networkd[1242]: lxc_health: Gained carrier
Jan 13 20:25:20.635151 systemd[1]: run-containerd-runc-k8s.io-7b2492d954cb26b98abfe53104a3b0743214232bef310c7965f3350812934c10-runc.E26VlR.mount: Deactivated successfully.
Jan 13 20:25:20.932544 kubelet[3079]: I0113 20:25:20.931184 3079 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-ccdpf" podStartSLOduration=8.93114298 podStartE2EDuration="8.93114298s" podCreationTimestamp="2025-01-13 20:25:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:25:17.82152364 +0000 UTC m=+353.209917718" watchObservedRunningTime="2025-01-13 20:25:20.93114298 +0000 UTC m=+356.319537058"
Jan 13 20:25:21.547636 systemd-networkd[1242]: lxc_health: Gained IPv6LL
Jan 13 20:25:24.802295 containerd[1640]: time="2025-01-13T20:25:24.802214494Z" level=info msg="StopPodSandbox for \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\""
Jan 13 20:25:24.802750 containerd[1640]: time="2025-01-13T20:25:24.802368175Z" level=info msg="TearDown network for sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" successfully"
Jan 13 20:25:24.802750 containerd[1640]: time="2025-01-13T20:25:24.802384735Z" level=info msg="StopPodSandbox for \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" returns successfully"
Jan 13 20:25:24.805389 containerd[1640]: time="2025-01-13T20:25:24.802893181Z" level=info msg="RemovePodSandbox for \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\""
Jan 13 20:25:24.805389 containerd[1640]: time="2025-01-13T20:25:24.802934781Z" level=info msg="Forcibly stopping sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\""
Jan 13 20:25:24.805389 containerd[1640]: time="2025-01-13T20:25:24.802998262Z" level=info msg="TearDown network for sandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" successfully"
Jan 13 20:25:24.810041 containerd[1640]: time="2025-01-13T20:25:24.809978778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:25:24.810179 containerd[1640]: time="2025-01-13T20:25:24.810064058Z" level=info msg="RemovePodSandbox \"b7982ead1c9ebe9b67015f370c44f8188f861cf64bb38d47827bd7fdcf6d0b5e\" returns successfully"
Jan 13 20:25:24.810884 containerd[1640]: time="2025-01-13T20:25:24.810827547Z" level=info msg="StopPodSandbox for \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\""
Jan 13 20:25:24.811009 containerd[1640]: time="2025-01-13T20:25:24.810923148Z" level=info msg="TearDown network for sandbox \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\" successfully"
Jan 13 20:25:24.811009 containerd[1640]: time="2025-01-13T20:25:24.810936148Z" level=info msg="StopPodSandbox for \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\" returns successfully"
Jan 13 20:25:24.813337 containerd[1640]: time="2025-01-13T20:25:24.812763728Z" level=info msg="RemovePodSandbox for \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\""
Jan 13 20:25:24.813337 containerd[1640]: time="2025-01-13T20:25:24.812937010Z" level=info msg="Forcibly stopping sandbox \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\""
Jan 13 20:25:24.813337 containerd[1640]: time="2025-01-13T20:25:24.813004850Z" level=info msg="TearDown network for sandbox \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\" successfully"
Jan 13 20:25:24.821232 containerd[1640]: time="2025-01-13T20:25:24.821162379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:25:24.821383 containerd[1640]: time="2025-01-13T20:25:24.821264540Z" level=info msg="RemovePodSandbox \"78edb416d937f6fa8d4053d0f6abec8dfcc373b56585bd9a65ea38c52daed72e\" returns successfully"
Jan 13 20:25:27.325767 sshd[5082]: Connection closed by 147.75.109.163 port 43294
Jan 13 20:25:27.326330 sshd-session[5021]: pam_unix(sshd:session): session closed for user core
Jan 13 20:25:27.331556 systemd[1]: sshd@22-138.199.153.199:22-147.75.109.163:43294.service: Deactivated successfully.
Jan 13 20:25:27.337679 systemd-logind[1612]: Session 23 logged out. Waiting for processes to exit.
Jan 13 20:25:27.339441 systemd[1]: session-23.scope: Deactivated successfully.
Jan 13 20:25:27.343766 systemd-logind[1612]: Removed session 23.