Dec 13 09:06:25.872786 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 09:06:25.872810 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 09:06:25.872821 kernel: KASLR enabled
Dec 13 09:06:25.872827 kernel: efi: EFI v2.7 by EDK II
Dec 13 09:06:25.872833 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Dec 13 09:06:25.872841 kernel: random: crng init done
Dec 13 09:06:25.872850 kernel: ACPI: Early table checksum verification disabled
Dec 13 09:06:25.872856 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Dec 13 09:06:25.872862 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Dec 13 09:06:25.872868 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872876 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872882 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872888 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872894 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872902 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872920 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872956 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872963 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 09:06:25.872970 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 09:06:25.872976 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Dec 13 09:06:25.872982 kernel: NUMA: Failed to initialise from firmware
Dec 13 09:06:25.872988 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 09:06:25.872995 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Dec 13 09:06:25.873001 kernel: Zone ranges:
Dec 13 09:06:25.873008 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 09:06:25.873014 kernel: DMA32 empty
Dec 13 09:06:25.873022 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Dec 13 09:06:25.873028 kernel: Movable zone start for each node
Dec 13 09:06:25.873034 kernel: Early memory node ranges
Dec 13 09:06:25.873041 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Dec 13 09:06:25.873047 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Dec 13 09:06:25.873054 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Dec 13 09:06:25.873060 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Dec 13 09:06:25.873066 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Dec 13 09:06:25.873072 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Dec 13 09:06:25.873079 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Dec 13 09:06:25.873085 kernel: psci: probing for conduit method from ACPI.
Dec 13 09:06:25.873093 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 09:06:25.873099 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 09:06:25.873106 kernel: psci: Trusted OS migration not required
Dec 13 09:06:25.873115 kernel: psci: SMC Calling Convention v1.1
Dec 13 09:06:25.873122 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 09:06:25.873129 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 09:06:25.873137 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 09:06:25.873144 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 09:06:25.873151 kernel: Detected PIPT I-cache on CPU0
Dec 13 09:06:25.873158 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 09:06:25.873164 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 09:06:25.873171 kernel: CPU features: detected: Spectre-v4
Dec 13 09:06:25.873178 kernel: CPU features: detected: Spectre-BHB
Dec 13 09:06:25.873184 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 09:06:25.873191 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 09:06:25.873198 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 09:06:25.873204 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 09:06:25.873212 kernel: alternatives: applying boot alternatives
Dec 13 09:06:25.873220 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 09:06:25.873228 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 09:06:25.873234 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 09:06:25.873241 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 09:06:25.873248 kernel: Fallback order for Node 0: 0
Dec 13 09:06:25.873254 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Dec 13 09:06:25.873261 kernel: Policy zone: Normal
Dec 13 09:06:25.873268 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 09:06:25.873275 kernel: software IO TLB: area num 2.
Dec 13 09:06:25.873282 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Dec 13 09:06:25.873290 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved)
Dec 13 09:06:25.873297 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 09:06:25.873304 kernel: trace event string verifier disabled
Dec 13 09:06:25.873310 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 09:06:25.873318 kernel: rcu: RCU event tracing is enabled.
Dec 13 09:06:25.873577 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 09:06:25.873585 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 09:06:25.873592 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 09:06:25.873599 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 09:06:25.873606 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 09:06:25.873612 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 09:06:25.873626 kernel: GICv3: 256 SPIs implemented
Dec 13 09:06:25.873634 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 09:06:25.873643 kernel: Root IRQ handler: gic_handle_irq
Dec 13 09:06:25.873651 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Dec 13 09:06:25.873658 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 09:06:25.873677 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 09:06:25.873685 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 09:06:25.873692 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 09:06:25.873699 kernel: GICv3: using LPI property table @0x00000001000e0000
Dec 13 09:06:25.873706 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Dec 13 09:06:25.873714 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 09:06:25.873723 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 09:06:25.873730 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 09:06:25.873737 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 09:06:25.873744 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 09:06:25.873751 kernel: Console: colour dummy device 80x25
Dec 13 09:06:25.873758 kernel: ACPI: Core revision 20230628
Dec 13 09:06:25.873765 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 09:06:25.873773 kernel: pid_max: default: 32768 minimum: 301
Dec 13 09:06:25.873780 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 09:06:25.873787 kernel: landlock: Up and running.
Dec 13 09:06:25.873795 kernel: SELinux: Initializing.
Dec 13 09:06:25.873802 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 09:06:25.873809 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 09:06:25.873816 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:06:25.873824 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 09:06:25.873831 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 09:06:25.873838 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 09:06:25.873845 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 09:06:25.873852 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 09:06:25.873860 kernel: Remapping and enabling EFI services.
Dec 13 09:06:25.873867 kernel: smp: Bringing up secondary CPUs ...
Dec 13 09:06:25.873874 kernel: Detected PIPT I-cache on CPU1
Dec 13 09:06:25.873881 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 09:06:25.873888 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Dec 13 09:06:25.873895 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 09:06:25.873902 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 09:06:25.874023 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 09:06:25.874031 kernel: SMP: Total of 2 processors activated.
Dec 13 09:06:25.874038 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 09:06:25.874048 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 09:06:25.874056 kernel: CPU features: detected: Common not Private translations
Dec 13 09:06:25.874068 kernel: CPU features: detected: CRC32 instructions
Dec 13 09:06:25.874077 kernel: CPU features: detected: Enhanced Virtualization Traps
Dec 13 09:06:25.874084 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 09:06:25.874092 kernel: CPU features: detected: LSE atomic instructions
Dec 13 09:06:25.874099 kernel: CPU features: detected: Privileged Access Never
Dec 13 09:06:25.874106 kernel: CPU features: detected: RAS Extension Support
Dec 13 09:06:25.874114 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 09:06:25.874123 kernel: CPU: All CPU(s) started at EL1
Dec 13 09:06:25.874130 kernel: alternatives: applying system-wide alternatives
Dec 13 09:06:25.874137 kernel: devtmpfs: initialized
Dec 13 09:06:25.874145 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 09:06:25.874152 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 09:06:25.874160 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 09:06:25.874167 kernel: SMBIOS 3.0.0 present.
Dec 13 09:06:25.874176 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Dec 13 09:06:25.874183 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 09:06:25.874191 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 09:06:25.874199 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 09:06:25.874206 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 09:06:25.874214 kernel: audit: initializing netlink subsys (disabled)
Dec 13 09:06:25.874221 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Dec 13 09:06:25.874229 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 09:06:25.874236 kernel: cpuidle: using governor menu
Dec 13 09:06:25.874245 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 09:06:25.874252 kernel: ASID allocator initialised with 32768 entries
Dec 13 09:06:25.874260 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 09:06:25.874267 kernel: Serial: AMBA PL011 UART driver
Dec 13 09:06:25.874275 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 09:06:25.874282 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 09:06:25.874289 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 09:06:25.874297 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 09:06:25.874304 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 09:06:25.874313 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 09:06:25.874321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 09:06:25.874328 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 09:06:25.874335 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 09:06:25.874343 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 09:06:25.874350 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 09:06:25.874358 kernel: ACPI: Added _OSI(Module Device)
Dec 13 09:06:25.874365 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 09:06:25.874372 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 09:06:25.874381 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 09:06:25.874388 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 09:06:25.874395 kernel: ACPI: Interpreter enabled
Dec 13 09:06:25.874403 kernel: ACPI: Using GIC for interrupt routing
Dec 13 09:06:25.874410 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 09:06:25.874417 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 09:06:25.874425 kernel: printk: console [ttyAMA0] enabled
Dec 13 09:06:25.874432 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 09:06:25.874586 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 09:06:25.874705 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 09:06:25.874787 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 09:06:25.874855 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 09:06:25.875352 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 09:06:25.875370 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 09:06:25.875378 kernel: PCI host bridge to bus 0000:00
Dec 13 09:06:25.875467 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 09:06:25.875536 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 09:06:25.875594 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 09:06:25.875653 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 09:06:25.875769 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 09:06:25.875851 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Dec 13 09:06:25.875947 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Dec 13 09:06:25.876025 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 09:06:25.876101 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.876170 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Dec 13 09:06:25.876244 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.876311 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Dec 13 09:06:25.876383 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.876449 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Dec 13 09:06:25.876524 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.876591 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Dec 13 09:06:25.876705 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.876787 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Dec 13 09:06:25.876861 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.876992 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Dec 13 09:06:25.877085 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.877152 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Dec 13 09:06:25.877226 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.877296 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Dec 13 09:06:25.877368 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Dec 13 09:06:25.877436 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Dec 13 09:06:25.877511 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Dec 13 09:06:25.877577 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Dec 13 09:06:25.877653 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 09:06:25.877743 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Dec 13 09:06:25.877812 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 09:06:25.877881 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 09:06:25.878053 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 09:06:25.878132 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Dec 13 09:06:25.878206 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Dec 13 09:06:25.878277 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Dec 13 09:06:25.878344 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Dec 13 09:06:25.878419 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Dec 13 09:06:25.878489 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Dec 13 09:06:25.878574 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 09:06:25.878642 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Dec 13 09:06:25.878736 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Dec 13 09:06:25.879051 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Dec 13 09:06:25.879125 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 09:06:25.879202 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Dec 13 09:06:25.879275 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Dec 13 09:06:25.879343 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Dec 13 09:06:25.879410 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Dec 13 09:06:25.879481 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Dec 13 09:06:25.879548 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Dec 13 09:06:25.879615 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Dec 13 09:06:25.879735 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Dec 13 09:06:25.882031 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Dec 13 09:06:25.882132 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Dec 13 09:06:25.882206 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 09:06:25.882273 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Dec 13 09:06:25.882339 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Dec 13 09:06:25.882408 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 09:06:25.882477 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Dec 13 09:06:25.882551 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Dec 13 09:06:25.882622 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 09:06:25.882714 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Dec 13 09:06:25.882785 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Dec 13 09:06:25.882855 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 09:06:25.882972 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Dec 13 09:06:25.883045 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Dec 13 09:06:25.883121 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 09:06:25.883188 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Dec 13 09:06:25.883252 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Dec 13 09:06:25.883324 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 09:06:25.883389 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Dec 13 09:06:25.883453 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Dec 13 09:06:25.883523 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 09:06:25.883588 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Dec 13 09:06:25.883657 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Dec 13 09:06:25.883755 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Dec 13 09:06:25.883825 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 09:06:25.883893 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Dec 13 09:06:25.885178 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 09:06:25.885267 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Dec 13 09:06:25.885332 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 09:06:25.885408 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Dec 13 09:06:25.885472 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 09:06:25.885540 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Dec 13 09:06:25.885606 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 09:06:25.885707 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Dec 13 09:06:25.885781 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 09:06:25.885851 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Dec 13 09:06:25.887033 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 09:06:25.887137 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Dec 13 09:06:25.887207 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 09:06:25.887276 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Dec 13 09:06:25.887343 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 09:06:25.887413 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Dec 13 09:06:25.887486 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Dec 13 09:06:25.887556 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Dec 13 09:06:25.887625 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 09:06:25.887753 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Dec 13 09:06:25.887829 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 09:06:25.887899 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Dec 13 09:06:25.888342 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 09:06:25.888417 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Dec 13 09:06:25.888514 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 09:06:25.888587 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Dec 13 09:06:25.888654 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 09:06:25.888745 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Dec 13 09:06:25.888813 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 09:06:25.888881 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Dec 13 09:06:25.892050 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 09:06:25.892151 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Dec 13 09:06:25.892232 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 09:06:25.892313 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Dec 13 09:06:25.892382 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Dec 13 09:06:25.892452 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Dec 13 09:06:25.892526 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Dec 13 09:06:25.892595 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 09:06:25.892678 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Dec 13 09:06:25.892753 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Dec 13 09:06:25.892825 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 09:06:25.892889 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Dec 13 09:06:25.892971 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 09:06:25.893047 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Dec 13 09:06:25.893118 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Dec 13 09:06:25.893188 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 09:06:25.893255 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Dec 13 09:06:25.893321 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 09:06:25.893393 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Dec 13 09:06:25.893461 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Dec 13 09:06:25.893527 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Dec 13 09:06:25.893594 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 09:06:25.893659 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Dec 13 09:06:25.893772 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 09:06:25.893848 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Dec 13 09:06:25.898084 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Dec 13 09:06:25.898193 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 09:06:25.898260 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Dec 13 09:06:25.898325 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 09:06:25.898401 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Dec 13 09:06:25.898478 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Dec 13 09:06:25.898544 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 09:06:25.898607 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Dec 13 09:06:25.898716 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 09:06:25.898802 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Dec 13 09:06:25.898871 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Dec 13 09:06:25.900015 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Dec 13 09:06:25.900100 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 09:06:25.900169 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Dec 13 09:06:25.900242 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 09:06:25.900319 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Dec 13 09:06:25.900390 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Dec 13 09:06:25.900458 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Dec 13 09:06:25.900525 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Dec 13 09:06:25.900590 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 09:06:25.900655 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Dec 13 09:06:25.900745 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 09:06:25.900814 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Dec 13 09:06:25.900880 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 09:06:25.901504 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Dec 13 09:06:25.901586 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 09:06:25.901657 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Dec 13 09:06:25.901777 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Dec 13 09:06:25.901847 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Dec 13 09:06:25.902136 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 09:06:25.902222 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 09:06:25.902282 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 09:06:25.902341 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 09:06:25.902412 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 09:06:25.902472 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Dec 13 09:06:25.902532 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Dec 13 09:06:25.902605 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Dec 13 09:06:25.902684 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Dec 13 09:06:25.902748 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Dec 13 09:06:25.902822 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Dec 13 09:06:25.902895 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Dec 13 09:06:25.903000 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Dec 13 09:06:25.903080 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 09:06:25.903146 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Dec 13 09:06:25.903208 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Dec 13 09:06:25.903285 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Dec 13 09:06:25.903349 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Dec 13 09:06:25.903411 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Dec 13 09:06:25.903478 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Dec 13 09:06:25.903542 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Dec 13 09:06:25.903603 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Dec 13 09:06:25.903684 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Dec 13 09:06:25.903750 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Dec 13 09:06:25.903816 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Dec 13 09:06:25.903886 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Dec 13 09:06:25.903974 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Dec 13 09:06:25.904039 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Dec 13 09:06:25.904108 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Dec 13 09:06:25.904169 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Dec 13 09:06:25.904233 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Dec 13 09:06:25.904247 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 09:06:25.904255 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 09:06:25.904263 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 09:06:25.904272 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 09:06:25.904280 kernel: iommu: Default domain type: Translated
Dec 13 09:06:25.904288 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 09:06:25.904296 kernel: efivars: Registered efivars operations
Dec 13 09:06:25.904308 kernel: vgaarb: loaded
Dec 13 09:06:25.904317 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 09:06:25.904326 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 09:06:25.904335 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 09:06:25.904342 kernel: pnp: PnP ACPI init
Dec 13 09:06:25.904430 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 09:06:25.904445 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 09:06:25.904453 kernel: NET: Registered PF_INET protocol family
Dec 13 09:06:25.904461 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 09:06:25.904469 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 09:06:25.904481 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 09:06:25.904489 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 09:06:25.904499 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 09:06:25.904507 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 09:06:25.904516 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 09:06:25.904528 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 09:06:25.904535 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 09:06:25.904610 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Dec 13 09:06:25.904621 kernel: PCI: CLS 0 bytes, default 64
Dec 13 09:06:25.904631 kernel: kvm [1]: HYP mode not available
Dec 13 09:06:25.904639 kernel: Initialise system trusted keyrings
Dec 13 09:06:25.904647 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 09:06:25.904654 kernel: Key type asymmetric registered
Dec 13 09:06:25.904694 kernel: Asymmetric key parser 'x509' registered
Dec 13 09:06:25.904705 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 09:06:25.904713 kernel: io scheduler mq-deadline registered
Dec 13 09:06:25.904721 kernel: io scheduler kyber registered
Dec 13 09:06:25.904728 kernel: io scheduler bfq registered
Dec 13 09:06:25.904740 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 09:06:25.904824 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Dec 13 09:06:25.904904 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Dec 13 09:06:25.906612 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:06:25.906731 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Dec 13 09:06:25.906805 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Dec 13 09:06:25.906871 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Dec 13 09:06:25.906967 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Dec 13 09:06:25.907036 kernel: pcieport 0000:00:02.2:
AER: enabled with IRQ 52 Dec 13 09:06:25.907105 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 09:06:25.907173 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Dec 13 09:06:25.907239 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Dec 13 09:06:25.907302 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 09:06:25.907373 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Dec 13 09:06:25.907439 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Dec 13 09:06:25.907503 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 09:06:25.907570 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Dec 13 09:06:25.907639 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Dec 13 09:06:25.907731 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 09:06:25.907808 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Dec 13 09:06:25.907875 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Dec 13 09:06:25.908581 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 09:06:25.908683 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Dec 13 09:06:25.908760 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Dec 13 09:06:25.908827 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 09:06:25.908844 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 13 09:06:25.908944 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 
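The pciehp lines above encode each slot capability with a `+` (present) or `-` (absent) suffix. As a small illustration, not part of any kernel tooling, the flags string from this log can be parsed in plain shell to list only the capabilities that are present:

```shell
# Slot capabilities exactly as printed by pciehp in the log above.
caps="AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+"

# Keep only the '+' entries, stripping the suffix.
enabled=""
for c in $caps; do
  case "$c" in
    *+) enabled="$enabled${c%+} ";;
  esac
done
echo "$enabled"
```

So every root-port slot on this VM advertises hot-plug, surprise removal, power control, and attention/power indicators, but no MRL sensor.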
Dec 13 09:06:25.909019 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Dec 13 09:06:25.909086 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 09:06:25.909096 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 09:06:25.909104 kernel: ACPI: button: Power Button [PWRB] Dec 13 09:06:25.909113 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 09:06:25.909188 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Dec 13 09:06:25.909266 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 13 09:06:25.909368 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Dec 13 09:06:25.909381 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 09:06:25.909389 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 09:06:25.909462 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Dec 13 09:06:25.909473 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Dec 13 09:06:25.909480 kernel: thunder_xcv, ver 1.0 Dec 13 09:06:25.909492 kernel: thunder_bgx, ver 1.0 Dec 13 09:06:25.909499 kernel: nicpf, ver 1.0 Dec 13 09:06:25.909507 kernel: nicvf, ver 1.0 Dec 13 09:06:25.909592 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 09:06:25.909659 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T09:06:25 UTC (1734080785) Dec 13 09:06:25.909705 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 09:06:25.909713 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 09:06:25.909721 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 09:06:25.909733 kernel: watchdog: Hard watchdog permanently disabled Dec 13 09:06:25.909742 kernel: NET: Registered PF_INET6 protocol family Dec 13 09:06:25.909750 kernel: Segment Routing with IPv6 Dec 13 09:06:25.909758 kernel: In-situ 
OAM (IOAM) with IPv6 Dec 13 09:06:25.909766 kernel: NET: Registered PF_PACKET protocol family Dec 13 09:06:25.909774 kernel: Key type dns_resolver registered Dec 13 09:06:25.909782 kernel: registered taskstats version 1 Dec 13 09:06:25.909790 kernel: Loading compiled-in X.509 certificates Dec 13 09:06:25.909797 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 09:06:25.909807 kernel: Key type .fscrypt registered Dec 13 09:06:25.909814 kernel: Key type fscrypt-provisioning registered Dec 13 09:06:25.909824 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 09:06:25.909834 kernel: ima: Allocated hash algorithm: sha1 Dec 13 09:06:25.909842 kernel: ima: No architecture policies found Dec 13 09:06:25.909851 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 09:06:25.909859 kernel: clk: Disabling unused clocks Dec 13 09:06:25.909867 kernel: Freeing unused kernel memory: 39360K Dec 13 09:06:25.909874 kernel: Run /init as init process Dec 13 09:06:25.909884 kernel: with arguments: Dec 13 09:06:25.909892 kernel: /init Dec 13 09:06:25.909900 kernel: with environment: Dec 13 09:06:25.909918 kernel: HOME=/ Dec 13 09:06:25.909927 kernel: TERM=linux Dec 13 09:06:25.909934 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 09:06:25.909944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 09:06:25.909954 systemd[1]: Detected virtualization kvm. Dec 13 09:06:25.909965 systemd[1]: Detected architecture arm64. Dec 13 09:06:25.909973 systemd[1]: Running in initrd. Dec 13 09:06:25.909981 systemd[1]: No hostname configured, using default hostname. 
Dec 13 09:06:25.909989 systemd[1]: Hostname set to . Dec 13 09:06:25.909997 systemd[1]: Initializing machine ID from VM UUID. Dec 13 09:06:25.910005 systemd[1]: Queued start job for default target initrd.target. Dec 13 09:06:25.910014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 09:06:25.910022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 09:06:25.910032 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 09:06:25.910041 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 09:06:25.910053 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 09:06:25.910062 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 09:06:25.910074 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 09:06:25.910084 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 09:06:25.910095 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 09:06:25.910105 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 09:06:25.910114 systemd[1]: Reached target paths.target - Path Units. Dec 13 09:06:25.910122 systemd[1]: Reached target slices.target - Slice Units. Dec 13 09:06:25.910131 systemd[1]: Reached target swap.target - Swaps. Dec 13 09:06:25.910139 systemd[1]: Reached target timers.target - Timer Units. Dec 13 09:06:25.910147 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 09:06:25.910156 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
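The device units above use systemd's unit-name escaping: `/` in a path becomes `-`, and a literal `-` becomes `\x2d`. `systemd-escape --unescape --path` undoes this; as a sketch of the same mapping in plain shell (the order works because the escape sequence `\x2d` itself contains no hyphen):

```shell
# Unit name taken from the log above (without the .device suffix).
unit='dev-disk-by\x2dlabel-EFI\x2dSYSTEM'

# First '-' -> '/', then the literal-hyphen escape '\x2d' -> '-'.
path=$(printf '%s' "$unit" | sed -e 's|-|/|g' -e 's|\\x2d|-|g')
echo "/$path"
```

which recovers the block-device path the unit stands for.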
Dec 13 09:06:25.910164 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 09:06:25.910174 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 09:06:25.910182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 09:06:25.910191 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 09:06:25.910199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 09:06:25.910207 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 09:06:25.910215 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 09:06:25.910225 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 09:06:25.910234 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 09:06:25.910242 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 09:06:25.910252 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 09:06:25.910260 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 09:06:25.910269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:06:25.910277 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 09:06:25.910311 systemd-journald[236]: Collecting audit messages is disabled. Dec 13 09:06:25.910334 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 09:06:25.910343 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 09:06:25.910352 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 09:06:25.910362 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:06:25.910370 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 09:06:25.910379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:06:25.910387 kernel: Bridge firewalling registered Dec 13 09:06:25.910396 systemd-journald[236]: Journal started Dec 13 09:06:25.910415 systemd-journald[236]: Runtime Journal (/run/log/journal/4c0d63d2be3e407c83257078bcbdf220) is 8.0M, max 76.5M, 68.5M free. Dec 13 09:06:25.892201 systemd-modules-load[237]: Inserted module 'overlay' Dec 13 09:06:25.912255 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 09:06:25.909786 systemd-modules-load[237]: Inserted module 'br_netfilter' Dec 13 09:06:25.913439 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 09:06:25.914446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 09:06:25.925096 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:06:25.934565 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 09:06:25.939277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 09:06:25.940366 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:06:25.949726 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 09:06:25.951943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 09:06:25.952810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:06:25.961051 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 09:06:25.968588 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
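The journald line above sizes the runtime journal as current usage, a cap, and remaining headroom. The three numbers are consistent with each other, as a quick check shows (values copied from the log):

```shell
# "Runtime Journal ... is 8.0M, max 76.5M, 68.5M free"
max_m=76.5
used_m=8.0
free_m=$(echo "$max_m $used_m" | awk '{ printf "%.1f", $1 - $2 }')
echo "${free_m}M free"
```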
Dec 13 09:06:25.971713 dracut-cmdline[266]: dracut-dracut-053 Dec 13 09:06:25.975863 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 09:06:25.997145 systemd-resolved[276]: Positive Trust Anchors: Dec 13 09:06:25.997160 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 09:06:25.997191 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 09:06:26.005230 systemd-resolved[276]: Defaulting to hostname 'linux'. Dec 13 09:06:26.006302 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 09:06:26.006973 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 09:06:26.063003 kernel: SCSI subsystem initialized Dec 13 09:06:26.066950 kernel: Loading iSCSI transport class v2.0-870. Dec 13 09:06:26.076022 kernel: iscsi: registered transport (tcp) Dec 13 09:06:26.091009 kernel: iscsi: registered transport (qla4xxx) Dec 13 09:06:26.091130 kernel: QLogic iSCSI HBA Driver Dec 13 09:06:26.141819 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
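The dracut line above prints the full kernel command line: space-separated `key=value` pairs and bare flags. A minimal parser over a shortened sample of that cmdline (this sketch ignores quoted values, which real cmdlines may contain; note that only the first `=` splits, so `root=LABEL=ROOT` keeps its value intact):

```shell
# Shortened sample of the kernel command line from the log above.
cmdline='rd.driver.pre=btrfs root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force'

parsed=$(for tok in $cmdline; do
  case "$tok" in
    *=*) printf '%s = %s\n' "${tok%%=*}" "${tok#*=}";;
    *)   printf '%s (bare flag)\n' "$tok";;
  esac
done)
echo "$parsed"
```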
Dec 13 09:06:26.148131 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 09:06:26.168167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 09:06:26.168256 kernel: device-mapper: uevent: version 1.0.3 Dec 13 09:06:26.168946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 09:06:26.217976 kernel: raid6: neonx8 gen() 15650 MB/s Dec 13 09:06:26.234967 kernel: raid6: neonx4 gen() 15561 MB/s Dec 13 09:06:26.251974 kernel: raid6: neonx2 gen() 13120 MB/s Dec 13 09:06:26.268947 kernel: raid6: neonx1 gen() 10393 MB/s Dec 13 09:06:26.285960 kernel: raid6: int64x8 gen() 6902 MB/s Dec 13 09:06:26.302957 kernel: raid6: int64x4 gen() 7311 MB/s Dec 13 09:06:26.319961 kernel: raid6: int64x2 gen() 6098 MB/s Dec 13 09:06:26.336968 kernel: raid6: int64x1 gen() 4990 MB/s Dec 13 09:06:26.337047 kernel: raid6: using algorithm neonx8 gen() 15650 MB/s Dec 13 09:06:26.353972 kernel: raid6: .... xor() 11852 MB/s, rmw enabled Dec 13 09:06:26.354036 kernel: raid6: using neon recovery algorithm Dec 13 09:06:26.359144 kernel: xor: measuring software checksum speed Dec 13 09:06:26.359196 kernel: 8regs : 19769 MB/sec Dec 13 09:06:26.359217 kernel: 32regs : 18548 MB/sec Dec 13 09:06:26.359238 kernel: arm64_neon : 26963 MB/sec Dec 13 09:06:26.359953 kernel: xor: using function: arm64_neon (26963 MB/sec) Dec 13 09:06:26.410055 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 09:06:26.425686 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 09:06:26.431208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 09:06:26.445447 systemd-udevd[454]: Using default interface naming scheme 'v255'. Dec 13 09:06:26.448969 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
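The raid6 lines above are a boot-time benchmark: the kernel times each gen() implementation and selects the fastest. Reproducing the selection from the throughput numbers in this log:

```shell
# Algorithm / MB/s pairs copied from the raid6 benchmark above.
results='neonx8 15650
neonx4 15561
neonx2 13120
neonx1 10393
int64x8 6902
int64x4 7311
int64x2 6098
int64x1 4990'

# Highest throughput wins -- matching "using algorithm neonx8 gen() 15650 MB/s".
winner=$(echo "$results" | sort -k2,2 -rn | head -n1)
echo "$winner"
```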
Dec 13 09:06:26.461161 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 09:06:26.481251 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Dec 13 09:06:26.521896 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 09:06:26.529169 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 09:06:26.579574 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 09:06:26.585214 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 09:06:26.604869 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 09:06:26.605696 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 09:06:26.609166 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 09:06:26.609773 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 09:06:26.618118 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 09:06:26.637164 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 09:06:26.691761 kernel: scsi host0: Virtio SCSI HBA Dec 13 09:06:26.698481 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 09:06:26.698565 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 09:06:26.700957 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 09:06:26.701079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:06:26.719751 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:06:26.722995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Dec 13 09:06:26.726076 kernel: ACPI: bus type USB registered Dec 13 09:06:26.726133 kernel: usbcore: registered new interface driver usbfs Dec 13 09:06:26.723210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:06:26.730470 kernel: usbcore: registered new interface driver hub Dec 13 09:06:26.730555 kernel: usbcore: registered new device driver usb Dec 13 09:06:26.724879 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:06:26.738319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 09:06:26.766960 kernel: sr 0:0:0:0: Power-on or device reset occurred Dec 13 09:06:26.772316 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Dec 13 09:06:26.772525 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 09:06:26.772538 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Dec 13 09:06:26.770286 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 09:06:26.779105 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 09:06:26.796103 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 09:06:26.796221 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 09:06:26.796304 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 09:06:26.796385 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 09:06:26.796466 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 13 09:06:26.796551 kernel: hub 1-0:1.0: USB hub found Dec 13 09:06:26.796653 kernel: hub 1-0:1.0: 4 ports detected Dec 13 09:06:26.796755 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 13 09:06:26.797113 kernel: sd 0:0:0:1: Power-on or device reset occurred Dec 13 09:06:26.804725 kernel: hub 2-0:1.0: USB hub found Dec 13 09:06:26.804878 kernel: hub 2-0:1.0: 4 ports detected Dec 13 09:06:26.805336 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 09:06:26.805444 kernel: sd 0:0:0:1: [sda] Write Protect is off Dec 13 09:06:26.805530 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Dec 13 09:06:26.805621 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 09:06:26.805746 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 09:06:26.805758 kernel: GPT:17805311 != 80003071 Dec 13 09:06:26.805768 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 09:06:26.805777 kernel: GPT:17805311 != 80003071 Dec 13 09:06:26.805790 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 09:06:26.805799 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 09:06:26.805809 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Dec 13 09:06:26.777547 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 09:06:26.810300 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 09:06:26.844977 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (515) Dec 13 09:06:26.847455 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (497) Dec 13 09:06:26.855049 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 13 09:06:26.871169 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 09:06:26.877528 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
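The GPT warnings above are typical of a disk image that was created small and later grown to the full volume: the backup GPT header still sits where the original image ended, while it belongs at the disk's last LBA. Checking that with the numbers from this log:

```shell
total_sectors=80003072    # from "sd 0:0:0:1: [sda] 80003072 512-byte logical blocks"
stale_alt_lba=17805311    # where the backup header currently is ("GPT:17805311 != 80003071")

# The backup GPT header belongs at the last addressable sector.
expected_alt_lba=$((total_sectors - 1))
echo "backup header should be at LBA $expected_alt_lba, found at $stale_alt_lba"
```

Tools such as `sgdisk -e` (or GNU Parted, as the kernel message suggests) relocate the backup structures to the true end of the disk; on a Flatcar first boot this is normally handled by the image's own resize machinery rather than by hand.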
Dec 13 09:06:26.878625 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 09:06:26.884506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 09:06:26.893183 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 09:06:26.906950 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 09:06:26.908442 disk-uuid[572]: Primary Header is updated. Dec 13 09:06:26.908442 disk-uuid[572]: Secondary Entries is updated. Dec 13 09:06:26.908442 disk-uuid[572]: Secondary Header is updated. Dec 13 09:06:27.035939 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 09:06:27.279245 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 13 09:06:27.415010 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 13 09:06:27.415156 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 09:06:27.415642 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 13 09:06:27.470078 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 13 09:06:27.471087 kernel: usbcore: registered new interface driver usbhid Dec 13 09:06:27.471110 kernel: usbhid: USB HID core driver Dec 13 09:06:27.926408 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 09:06:27.927225 disk-uuid[573]: The operation has completed successfully. Dec 13 09:06:27.976592 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 09:06:27.976742 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 09:06:27.993077 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Dec 13 09:06:28.002517 sh[590]: Success Dec 13 09:06:28.014982 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 09:06:28.064460 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 09:06:28.079105 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 09:06:28.084223 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 09:06:28.106388 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 09:06:28.106467 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 09:06:28.106491 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 09:06:28.107059 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 09:06:28.107966 kernel: BTRFS info (device dm-0): using free space tree Dec 13 09:06:28.113947 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 09:06:28.115814 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 09:06:28.116535 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 09:06:28.126172 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 09:06:28.131113 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
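The verity line above shows dm-verity using the sha256-ce (ARMv8 Crypto Extensions) implementation: every block read from /dev/mapper/usr is hashed and checked against a hash tree whose root must match the `verity.usrhash=` value on the kernel command line. The per-block primitive is plain SHA-256; hashing a stand-in "block" here (a real verity leaf also folds in the device's salt, omitted in this sketch):

```shell
# SHA-256 of a stand-in data block, the same primitive dm-verity uses per block.
digest=$(printf 'hello' | sha256sum | cut -d' ' -f1)
echo "$digest"
```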
Dec 13 09:06:28.146063 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 09:06:28.146119 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 09:06:28.146132 kernel: BTRFS info (device sda6): using free space tree Dec 13 09:06:28.149953 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 09:06:28.150019 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 09:06:28.161411 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 09:06:28.163027 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 09:06:28.169084 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 09:06:28.177507 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 09:06:28.245686 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 09:06:28.256149 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 09:06:28.277395 systemd-networkd[777]: lo: Link UP Dec 13 09:06:28.277971 systemd-networkd[777]: lo: Gained carrier Dec 13 09:06:28.279043 ignition[690]: Ignition 2.19.0 Dec 13 09:06:28.279567 systemd-networkd[777]: Enumeration completed Dec 13 09:06:28.279049 ignition[690]: Stage: fetch-offline Dec 13 09:06:28.280703 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 09:06:28.279080 ignition[690]: no configs at "/usr/lib/ignition/base.d" Dec 13 09:06:28.279088 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 09:06:28.282816 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
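The repeated "found matching network '/usr/lib/systemd/network/zz-default.network'" messages come from a catch-all match: the file matches any interface name, which is why networkd warns that the name is "potentially unpredictable". The exact contents of Flatcar's zz-default.network may differ, but a catch-all DHCP network file has roughly this shape:

```
[Match]
Name=*

[Network]
DHCP=yes
```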
Dec 13 09:06:28.279247 ignition[690]: parsed url from cmdline: "" Dec 13 09:06:28.282819 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:06:28.279251 ignition[690]: no config URL provided Dec 13 09:06:28.283758 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 09:06:28.279255 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 09:06:28.284190 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:06:28.279262 ignition[690]: no config at "/usr/lib/ignition/user.ign" Dec 13 09:06:28.284193 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 09:06:28.279266 ignition[690]: failed to fetch config: resource requires networking Dec 13 09:06:28.286732 systemd[1]: Reached target network.target - Network. Dec 13 09:06:28.279437 ignition[690]: Ignition finished successfully Dec 13 09:06:28.287198 systemd-networkd[777]: eth0: Link UP Dec 13 09:06:28.287201 systemd-networkd[777]: eth0: Gained carrier Dec 13 09:06:28.287209 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:06:28.290199 systemd-networkd[777]: eth1: Link UP Dec 13 09:06:28.290202 systemd-networkd[777]: eth1: Gained carrier Dec 13 09:06:28.290211 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 09:06:28.295127 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 09:06:28.309347 ignition[783]: Ignition 2.19.0
Dec 13 09:06:28.309357 ignition[783]: Stage: fetch
Dec 13 09:06:28.309527 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:06:28.309535 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 09:06:28.309633 ignition[783]: parsed url from cmdline: ""
Dec 13 09:06:28.309636 ignition[783]: no config URL provided
Dec 13 09:06:28.309640 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 09:06:28.309676 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Dec 13 09:06:28.309697 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Dec 13 09:06:28.310346 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Dec 13 09:06:28.320029 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 09:06:28.346031 systemd-networkd[777]: eth0: DHCPv4 address 188.245.82.140/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 09:06:28.510539 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Dec 13 09:06:28.517216 ignition[783]: GET result: OK
Dec 13 09:06:28.517370 ignition[783]: parsing config with SHA512: bf55bab39e04b72e968cb4a931e0d149055dff83fee13f5458e3a75bb354df74df7f261aa7a84eb205a5d52a9421850f11e996a88508eebb8afc662264f79c02
Dec 13 09:06:28.523612 unknown[783]: fetched base config from "system"
Dec 13 09:06:28.523623 unknown[783]: fetched base config from "system"
Dec 13 09:06:28.523632 unknown[783]: fetched user config from "hetzner"
Dec 13 09:06:28.525133 ignition[783]: fetch: fetch complete
Dec 13 09:06:28.525144 ignition[783]: fetch: fetch passed
Dec 13 09:06:28.525203 ignition[783]: Ignition finished successfully
Dec 13 09:06:28.526761 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 09:06:28.534226 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 09:06:28.548126 ignition[791]: Ignition 2.19.0
Dec 13 09:06:28.548135 ignition[791]: Stage: kargs
Dec 13 09:06:28.548306 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:06:28.548316 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 09:06:28.549277 ignition[791]: kargs: kargs passed
Dec 13 09:06:28.549326 ignition[791]: Ignition finished successfully
Dec 13 09:06:28.550432 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 09:06:28.557170 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 09:06:28.570810 ignition[797]: Ignition 2.19.0
Dec 13 09:06:28.570821 ignition[797]: Stage: disks
Dec 13 09:06:28.571027 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Dec 13 09:06:28.571037 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 09:06:28.572030 ignition[797]: disks: disks passed
Dec 13 09:06:28.573371 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 09:06:28.572082 ignition[797]: Ignition finished successfully
Dec 13 09:06:28.574753 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 09:06:28.575806 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 09:06:28.576726 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 09:06:28.577774 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 09:06:28.578858 systemd[1]: Reached target basic.target - Basic System.
Dec 13 09:06:28.584102 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 09:06:28.600989 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 09:06:28.604976 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 09:06:28.612091 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 09:06:28.659939 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 09:06:28.660409 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 09:06:28.661844 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 09:06:28.671086 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 09:06:28.673976 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 09:06:28.676255 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 09:06:28.682109 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 09:06:28.682156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 09:06:28.691047 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (813)
Dec 13 09:06:28.691570 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 09:06:28.696013 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 09:06:28.696063 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 09:06:28.696958 kernel: BTRFS info (device sda6): using free space tree
Dec 13 09:06:28.698182 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 09:06:28.704399 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 09:06:28.704460 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 09:06:28.710661 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 09:06:28.749944 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 09:06:28.756520 coreos-metadata[815]: Dec 13 09:06:28.756 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Dec 13 09:06:28.758410 coreos-metadata[815]: Dec 13 09:06:28.758 INFO Fetch successful
Dec 13 09:06:28.760071 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Dec 13 09:06:28.761352 coreos-metadata[815]: Dec 13 09:06:28.761 INFO wrote hostname ci-4081-2-1-a-d14f804a70 to /sysroot/etc/hostname
Dec 13 09:06:28.764360 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 09:06:28.766321 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 09:06:28.771859 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 09:06:28.873883 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 09:06:28.881149 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 09:06:28.886055 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 09:06:28.896035 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 09:06:28.920299 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 09:06:28.925437 ignition[930]: INFO : Ignition 2.19.0
Dec 13 09:06:28.925437 ignition[930]: INFO : Stage: mount
Dec 13 09:06:28.927589 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 09:06:28.927589 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 09:06:28.927589 ignition[930]: INFO : mount: mount passed
Dec 13 09:06:28.927589 ignition[930]: INFO : Ignition finished successfully
Dec 13 09:06:28.927868 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 09:06:28.936383 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 09:06:29.107613 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 09:06:29.115534 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 09:06:29.135568 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942)
Dec 13 09:06:29.135652 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 09:06:29.135679 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 09:06:29.136113 kernel: BTRFS info (device sda6): using free space tree
Dec 13 09:06:29.140943 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 09:06:29.141024 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 09:06:29.145029 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 09:06:29.171614 ignition[959]: INFO : Ignition 2.19.0
Dec 13 09:06:29.171614 ignition[959]: INFO : Stage: files
Dec 13 09:06:29.172810 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 09:06:29.172810 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 09:06:29.175719 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 09:06:29.175719 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 09:06:29.175719 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 09:06:29.179747 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 09:06:29.181037 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 09:06:29.181037 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 09:06:29.180206 unknown[959]: wrote ssh authorized keys file for user: core
Dec 13 09:06:29.183263 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 09:06:29.183263 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 09:06:29.954540 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 09:06:29.975538 systemd-networkd[777]: eth0: Gained IPv6LL
Dec 13 09:06:30.295136 systemd-networkd[777]: eth1: Gained IPv6LL
Dec 13 09:06:33.690324 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 09:06:33.692959 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 09:06:33.692959 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Dec 13 09:06:34.265926 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 09:06:34.353928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 09:06:34.355455 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 09:06:34.355455 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 09:06:34.355455 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 09:06:34.355455 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 09:06:34.355455 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 09:06:34.361319 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Dec 13 09:06:34.906456 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 09:06:35.119234 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 09:06:35.119234 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 09:06:35.123026 ignition[959]: INFO : files: files passed
Dec 13 09:06:35.123026 ignition[959]: INFO : Ignition finished successfully
Dec 13 09:06:35.124700 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 09:06:35.135728 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 09:06:35.138261 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 09:06:35.142291 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 09:06:35.143135 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 09:06:35.159523 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 09:06:35.159523 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 09:06:35.163853 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 09:06:35.166137 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 09:06:35.167535 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 09:06:35.176473 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 09:06:35.206725 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 09:06:35.206946 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 09:06:35.210135 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 09:06:35.211087 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 09:06:35.212112 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 09:06:35.214475 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 09:06:35.235237 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 09:06:35.241146 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 09:06:35.256872 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 09:06:35.258356 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 09:06:35.259133 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 09:06:35.260240 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 09:06:35.260367 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 09:06:35.262272 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 09:06:35.262904 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 09:06:35.264115 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 09:06:35.265479 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 09:06:35.266580 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 09:06:35.267658 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 09:06:35.268746 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 09:06:35.269958 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 09:06:35.270969 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 09:06:35.272093 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 09:06:35.272996 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 09:06:35.273126 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 09:06:35.274407 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:06:35.275070 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:06:35.276136 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 09:06:35.277950 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:06:35.279101 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 09:06:35.279226 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 09:06:35.280868 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 09:06:35.281010 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 09:06:35.282214 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 09:06:35.282308 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 09:06:35.283331 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 09:06:35.283423 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 09:06:35.292413 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 09:06:35.298678 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 09:06:35.299301 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 09:06:35.299454 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:06:35.302260 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 09:06:35.302379 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 09:06:35.310981 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 09:06:35.311703 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 09:06:35.317736 ignition[1012]: INFO : Ignition 2.19.0
Dec 13 09:06:35.318748 ignition[1012]: INFO : Stage: umount
Dec 13 09:06:35.323523 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 09:06:35.323523 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Dec 13 09:06:35.323523 ignition[1012]: INFO : umount: umount passed
Dec 13 09:06:35.323523 ignition[1012]: INFO : Ignition finished successfully
Dec 13 09:06:35.322982 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 09:06:35.324408 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 09:06:35.324633 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 09:06:35.325758 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 09:06:35.325901 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 09:06:35.327512 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 09:06:35.327644 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 09:06:35.329946 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 09:06:35.330005 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 09:06:35.331888 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 09:06:35.332014 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 09:06:35.332840 systemd[1]: Stopped target network.target - Network.
Dec 13 09:06:35.333671 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 09:06:35.333775 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 09:06:35.334675 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 09:06:35.335499 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 09:06:35.343015 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:06:35.344159 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 09:06:35.346077 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 09:06:35.347147 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 09:06:35.347236 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 09:06:35.348173 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 09:06:35.348215 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 09:06:35.349201 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 09:06:35.349261 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 09:06:35.350262 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 09:06:35.350306 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 09:06:35.351218 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 09:06:35.351259 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 09:06:35.352360 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 09:06:35.353195 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 09:06:35.358003 systemd-networkd[777]: eth0: DHCPv6 lease lost
Dec 13 09:06:35.362564 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 09:06:35.362945 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 09:06:35.365227 systemd-networkd[777]: eth1: DHCPv6 lease lost
Dec 13 09:06:35.368327 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 09:06:35.368508 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 09:06:35.370379 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 09:06:35.370460 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:06:35.376210 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 09:06:35.376965 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 09:06:35.377049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 09:06:35.380950 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 09:06:35.381019 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:06:35.384111 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 09:06:35.384251 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:06:35.387188 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 09:06:35.387255 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:06:35.388286 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:06:35.398563 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 09:06:35.398759 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 09:06:35.401238 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 09:06:35.401990 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:06:35.403458 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 09:06:35.403536 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:06:35.404713 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 09:06:35.404750 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:06:35.405822 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 09:06:35.405876 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 09:06:35.408403 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 09:06:35.408503 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 09:06:35.410445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 09:06:35.410502 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 09:06:35.417214 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 09:06:35.418355 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 09:06:35.418450 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:06:35.421977 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 09:06:35.422040 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 09:06:35.422740 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 09:06:35.422787 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:06:35.424250 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 09:06:35.424302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:06:35.426256 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 09:06:35.426362 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 09:06:35.427739 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 09:06:35.434168 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 09:06:35.445712 systemd[1]: Switching root.
Dec 13 09:06:35.478546 systemd-journald[236]: Journal stopped
Dec 13 09:06:36.444860 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Dec 13 09:06:36.444955 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 09:06:36.444976 kernel: SELinux: policy capability open_perms=1
Dec 13 09:06:36.444986 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 09:06:36.444997 kernel: SELinux: policy capability always_check_network=0
Dec 13 09:06:36.445011 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 09:06:36.445021 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 09:06:36.445035 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 09:06:36.445049 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 09:06:36.445059 kernel: audit: type=1403 audit(1734080795.654:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 09:06:36.445069 systemd[1]: Successfully loaded SELinux policy in 36.650ms.
Dec 13 09:06:36.445093 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.742ms.
Dec 13 09:06:36.445105 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 09:06:36.445116 systemd[1]: Detected virtualization kvm.
Dec 13 09:06:36.445128 systemd[1]: Detected architecture arm64.
Dec 13 09:06:36.445140 systemd[1]: Detected first boot.
Dec 13 09:06:36.445151 systemd[1]: Hostname set to .
Dec 13 09:06:36.445161 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 09:06:36.445172 zram_generator::config[1055]: No configuration found.
Dec 13 09:06:36.445183 systemd[1]: Populated /etc with preset unit settings.
Dec 13 09:06:36.445194 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 09:06:36.445204 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 09:06:36.445214 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 09:06:36.445227 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 09:06:36.445237 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 09:06:36.445248 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 09:06:36.445258 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 09:06:36.445268 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 09:06:36.445278 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 09:06:36.445288 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 09:06:36.445299 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 09:06:36.445309 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 09:06:36.445321 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 09:06:36.445332 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 09:06:36.445342 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 09:06:36.445352 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 09:06:36.445363 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 09:06:36.445378 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Dec 13 09:06:36.445388 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 09:06:36.445399 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 09:06:36.445412 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 09:06:36.445422 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 09:06:36.445433 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 09:06:36.445443 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 09:06:36.445457 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 09:06:36.445468 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 09:06:36.445478 systemd[1]: Reached target swap.target - Swaps.
Dec 13 09:06:36.445494 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 09:06:36.445504 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 09:06:36.445515 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 09:06:36.445525 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 09:06:36.445536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 09:06:36.445547 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 09:06:36.445557 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 09:06:36.445567 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 09:06:36.445610 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 09:06:36.445625 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 09:06:36.445636 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 09:06:36.445646 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 09:06:36.445658 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 09:06:36.445672 systemd[1]: Reached target machines.target - Containers.
Dec 13 09:06:36.445684 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 09:06:36.445698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:06:36.445708 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 09:06:36.445719 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 09:06:36.445730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:06:36.445740 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 09:06:36.445750 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:06:36.445761 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 09:06:36.446984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 09:06:36.447021 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 09:06:36.447033 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 09:06:36.447045 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 09:06:36.447055 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 09:06:36.447066 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 09:06:36.447076 kernel: fuse: init (API version 7.39)
Dec 13 09:06:36.447087 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 09:06:36.447098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 09:06:36.447109 kernel: loop: module loaded
Dec 13 09:06:36.447121 kernel: ACPI: bus type drm_connector registered
Dec 13 09:06:36.447132 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 09:06:36.447143 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 09:06:36.447153 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 09:06:36.447165 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 09:06:36.447176 systemd[1]: Stopped verity-setup.service.
Dec 13 09:06:36.447186 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 09:06:36.447197 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 09:06:36.447208 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 09:06:36.447221 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 09:06:36.447232 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 09:06:36.447243 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 09:06:36.447255 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 09:06:36.447265 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 09:06:36.447278 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 09:06:36.447288 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 09:06:36.447299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:06:36.447337 systemd-journald[1129]: Collecting audit messages is disabled.
Dec 13 09:06:36.447361 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:06:36.447373 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 09:06:36.447386 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 09:06:36.447397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:06:36.447409 systemd-journald[1129]: Journal started
Dec 13 09:06:36.447431 systemd-journald[1129]: Runtime Journal (/run/log/journal/4c0d63d2be3e407c83257078bcbdf220) is 8.0M, max 76.5M, 68.5M free.
Dec 13 09:06:36.157322 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 09:06:36.181552 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 09:06:36.448961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:06:36.182391 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 09:06:36.451018 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 09:06:36.452729 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 09:06:36.452905 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 09:06:36.453802 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 09:06:36.453969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 09:06:36.454960 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 09:06:36.455867 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 09:06:36.457133 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 09:06:36.470553 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 09:06:36.476052 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 09:06:36.483493 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 09:06:36.485149 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 09:06:36.485199 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 09:06:36.489220 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 09:06:36.502114 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 09:06:36.506094 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 09:06:36.507142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:06:36.518178 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 09:06:36.520827 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 09:06:36.521749 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 09:06:36.525162 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 09:06:36.527615 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 09:06:36.529110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 09:06:36.535245 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 09:06:36.540118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 09:06:36.543878 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 09:06:36.547112 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 09:06:36.548041 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 09:06:36.549676 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 09:06:36.563141 systemd-journald[1129]: Time spent on flushing to /var/log/journal/4c0d63d2be3e407c83257078bcbdf220 is 74.829ms for 1130 entries.
Dec 13 09:06:36.563141 systemd-journald[1129]: System Journal (/var/log/journal/4c0d63d2be3e407c83257078bcbdf220) is 8.0M, max 584.8M, 576.8M free.
Dec 13 09:06:36.656812 systemd-journald[1129]: Received client request to flush runtime journal.
Dec 13 09:06:36.656872 kernel: loop0: detected capacity change from 0 to 8
Dec 13 09:06:36.656892 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 09:06:36.656922 kernel: loop1: detected capacity change from 0 to 189592
Dec 13 09:06:36.562201 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 09:06:36.576978 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 09:06:36.578281 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 09:06:36.587089 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 09:06:36.600277 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 09:06:36.611514 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Dec 13 09:06:36.611525 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Dec 13 09:06:36.621638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 09:06:36.628273 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 09:06:36.631943 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 09:06:36.662345 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 09:06:36.673172 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 09:06:36.675734 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 09:06:36.703609 kernel: loop2: detected capacity change from 0 to 114432
Dec 13 09:06:36.729009 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 09:06:36.741070 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 09:06:36.747036 kernel: loop3: detected capacity change from 0 to 114328
Dec 13 09:06:36.779766 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Dec 13 09:06:36.779786 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Dec 13 09:06:36.791376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 09:06:36.798960 kernel: loop4: detected capacity change from 0 to 8
Dec 13 09:06:36.799761 kernel: loop5: detected capacity change from 0 to 189592
Dec 13 09:06:36.823019 kernel: loop6: detected capacity change from 0 to 114432
Dec 13 09:06:36.849181 kernel: loop7: detected capacity change from 0 to 114328
Dec 13 09:06:36.861183 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Dec 13 09:06:36.862715 (sd-merge)[1197]: Merged extensions into '/usr'.
Dec 13 09:06:36.869216 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 09:06:36.869232 systemd[1]: Reloading...
Dec 13 09:06:37.017120 zram_generator::config[1220]: No configuration found.
Dec 13 09:06:37.072994 ldconfig[1164]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 09:06:37.158183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:06:37.205335 systemd[1]: Reloading finished in 335 ms.
Dec 13 09:06:37.228016 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 09:06:37.231844 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 09:06:37.246263 systemd[1]: Starting ensure-sysext.service...
Dec 13 09:06:37.250223 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 09:06:37.253527 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 09:06:37.266256 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 09:06:37.267152 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Dec 13 09:06:37.267161 systemd[1]: Reloading...
Dec 13 09:06:37.282365 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 09:06:37.283084 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 09:06:37.284200 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 09:06:37.285161 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Dec 13 09:06:37.285367 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Dec 13 09:06:37.289736 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 09:06:37.289747 systemd-tmpfiles[1262]: Skipping /boot
Dec 13 09:06:37.300220 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 09:06:37.300233 systemd-tmpfiles[1262]: Skipping /boot
Dec 13 09:06:37.324633 systemd-udevd[1264]: Using default interface naming scheme 'v255'.
Dec 13 09:06:37.353962 zram_generator::config[1291]: No configuration found.
Dec 13 09:06:37.492953 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1304)
Dec 13 09:06:37.511928 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 09:06:37.522344 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1304)
Dec 13 09:06:37.538238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:06:37.589027 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1303)
Dec 13 09:06:37.609258 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Dec 13 09:06:37.609818 systemd[1]: Reloading finished in 342 ms.
Dec 13 09:06:37.622206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 09:06:37.624985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 09:06:37.668641 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Dec 13 09:06:37.678221 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 09:06:37.684213 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 09:06:37.686088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:06:37.693115 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Dec 13 09:06:37.693202 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 09:06:37.693219 kernel: [drm] features: -context_init
Dec 13 09:06:37.693968 kernel: [drm] number of scanouts: 1
Dec 13 09:06:37.694023 kernel: [drm] number of cap sets: 0
Dec 13 09:06:37.694059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 09:06:37.697340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 09:06:37.699196 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 09:06:37.700728 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:06:37.702351 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 09:06:37.704992 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Dec 13 09:06:37.708177 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 09:06:37.714099 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 09:06:37.717852 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 09:06:37.722124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:06:37.722290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:06:37.728812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 09:06:37.732929 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 09:06:37.735092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 09:06:37.738692 systemd[1]: Finished ensure-sysext.service.
Dec 13 09:06:37.745191 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 09:06:37.770941 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 09:06:37.791287 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 09:06:37.795927 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 09:06:37.802013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 09:06:37.802173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 09:06:37.805732 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 09:06:37.805902 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 09:06:37.808653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 09:06:37.809409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 09:06:37.812944 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 09:06:37.823773 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 09:06:37.825984 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 09:06:37.839240 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 09:06:37.849406 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 09:06:37.850113 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 09:06:37.851974 augenrules[1403]: No rules
Dec 13 09:06:37.857668 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 09:06:37.866044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 09:06:37.868877 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 09:06:37.870228 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 09:06:37.876816 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Dec 13 09:06:37.889347 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 09:06:37.890048 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 09:06:37.890279 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 09:06:37.891978 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 09:06:37.923539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 09:06:37.972824 systemd-networkd[1375]: lo: Link UP
Dec 13 09:06:37.972832 systemd-networkd[1375]: lo: Gained carrier
Dec 13 09:06:37.978359 systemd-networkd[1375]: Enumeration completed
Dec 13 09:06:37.978526 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 09:06:37.983214 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 09:06:37.983225 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 09:06:37.983999 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 09:06:37.984009 systemd-networkd[1375]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 09:06:37.984463 systemd-networkd[1375]: eth0: Link UP
Dec 13 09:06:37.984466 systemd-networkd[1375]: eth0: Gained carrier
Dec 13 09:06:37.984480 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 09:06:37.987124 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 09:06:37.987800 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 09:06:37.988653 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 09:06:37.989268 systemd-networkd[1375]: eth1: Link UP
Dec 13 09:06:37.990018 systemd-networkd[1375]: eth1: Gained carrier
Dec 13 09:06:37.990046 systemd-networkd[1375]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 09:06:37.992701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 09:06:38.003068 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 09:06:38.003597 systemd-resolved[1377]: Positive Trust Anchors:
Dec 13 09:06:38.003863 systemd-resolved[1377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 09:06:38.003899 systemd-resolved[1377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 09:06:38.009138 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 09:06:38.013616 systemd-resolved[1377]: Using system hostname 'ci-4081-2-1-a-d14f804a70'.
Dec 13 09:06:38.016354 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 09:06:38.017164 systemd[1]: Reached target network.target - Network.
Dec 13 09:06:38.017677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 09:06:38.023713 systemd-networkd[1375]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 09:06:38.024471 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Dec 13 09:06:38.028879 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 09:06:38.058039 systemd-networkd[1375]: eth0: DHCPv4 address 188.245.82.140/32, gateway 172.31.1.1 acquired from 172.31.1.1
Dec 13 09:06:38.059362 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Dec 13 09:06:38.060986 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Dec 13 09:06:38.063122 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 09:06:38.064454 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 09:06:38.065538 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 09:06:38.066679 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 09:06:38.067831 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 09:06:38.068823 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 09:06:38.069664 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 09:06:38.070401 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 09:06:38.071115 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 09:06:38.071155 systemd[1]: Reached target paths.target - Path Units.
Dec 13 09:06:38.071642 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 09:06:38.074587 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 09:06:38.077309 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 09:06:38.085626 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 09:06:38.088167 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 09:06:38.089752 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 09:06:38.090804 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 09:06:38.091692 systemd[1]: Reached target basic.target - Basic System.
Dec 13 09:06:38.092513 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 09:06:38.092546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 09:06:38.102217 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 09:06:38.109287 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 09:06:38.111025 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 09:06:38.111594 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 09:06:38.117087 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 09:06:38.121166 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 09:06:38.122998 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 09:06:38.125737 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 09:06:38.130063 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 09:06:38.134992 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Dec 13 09:06:38.140298 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 09:06:38.144720 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 09:06:38.149732 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 09:06:38.152826 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 09:06:38.153355 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 09:06:38.157110 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 09:06:38.159034 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 09:06:38.168936 jq[1441]: false
Dec 13 09:06:38.174322 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 09:06:38.186549 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 09:06:38.187758 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 09:06:38.192827 dbus-daemon[1440]: [system] SELinux support is enabled
Dec 13 09:06:38.193053 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 09:06:38.199514 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 09:06:38.199549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 09:06:38.201080 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 09:06:38.201108 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 09:06:38.217316 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 09:06:38.223645 jq[1453]: true
Dec 13 09:06:38.217530 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 09:06:38.241140 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 09:06:38.253070 extend-filesystems[1442]: Found loop4
Dec 13 09:06:38.253649 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 09:06:38.253847 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found loop5
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found loop6
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found loop7
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found sda
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found sda1
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found sda2
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found sda3
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found usr
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found sda4
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found sda6
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found sda7
Dec 13 09:06:38.257976 extend-filesystems[1442]: Found sda9
Dec 13 09:06:38.257976 extend-filesystems[1442]: Checking size of /dev/sda9
Dec 13 09:06:38.317163 coreos-metadata[1439]: Dec 13 09:06:38.262 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Dec 13 09:06:38.317163 coreos-metadata[1439]: Dec 13 09:06:38.265 INFO Fetch successful
Dec 13 09:06:38.317163 coreos-metadata[1439]: Dec 13 09:06:38.269 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Dec 13 09:06:38.317163 coreos-metadata[1439]: Dec 13 09:06:38.277 INFO Fetch successful
Dec 13 09:06:38.317341 update_engine[1452]: I20241213 09:06:38.302534 1452 main.cc:92] Flatcar Update Engine starting
Dec 13 09:06:38.317552 jq[1474]: true
Dec 13 09:06:38.317636 tar[1465]: linux-arm64/helm
Dec 13 09:06:38.317808 extend-filesystems[1442]: Resized partition /dev/sda9
Dec 13 09:06:38.326706 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Dec 13 09:06:38.320002 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 09:06:38.326988 update_engine[1452]: I20241213 09:06:38.320181 1452 update_check_scheduler.cc:74] Next update check in 11m25s
Dec 13 09:06:38.327037 extend-filesystems[1487]: resize2fs 1.47.1 (20-May-2024)
Dec 13 09:06:38.336235 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 09:06:38.376991 systemd-logind[1451]: New seat seat0.
Dec 13 09:06:38.389196 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 09:06:38.389219 systemd-logind[1451]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Dec 13 09:06:38.391591 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 09:06:38.447686 bash[1510]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 09:06:38.462544 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 09:06:38.475206 systemd[1]: Starting sshkeys.service...
Dec 13 09:06:38.480962 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 09:06:38.485498 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 09:06:38.514934 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1310)
Dec 13 09:06:38.538156 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Dec 13 09:06:38.540135 extend-filesystems[1487]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 09:06:38.540135 extend-filesystems[1487]: old_desc_blocks = 1, new_desc_blocks = 5
Dec 13 09:06:38.540135 extend-filesystems[1487]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Dec 13 09:06:38.550069 extend-filesystems[1442]: Resized filesystem in /dev/sda9
Dec 13 09:06:38.550069 extend-filesystems[1442]: Found sr0
Dec 13 09:06:38.545245 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 09:06:38.545434 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 09:06:38.562007 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 09:06:38.574029 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 09:06:38.616375 coreos-metadata[1521]: Dec 13 09:06:38.614 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Dec 13 09:06:38.616819 containerd[1467]: time="2024-12-13T09:06:38.616723160Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 09:06:38.620246 coreos-metadata[1521]: Dec 13 09:06:38.617 INFO Fetch successful
Dec 13 09:06:38.623270 unknown[1521]: wrote ssh authorized keys file for user: core
Dec 13 09:06:38.652637 update-ssh-keys[1527]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 09:06:38.654233 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Dec 13 09:06:38.662963 systemd[1]: Finished sshkeys.service.
Dec 13 09:06:38.672176 containerd[1467]: time="2024-12-13T09:06:38.672121960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:06:38.678184 containerd[1467]: time="2024-12-13T09:06:38.678127480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:06:38.678450 containerd[1467]: time="2024-12-13T09:06:38.678378840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 09:06:38.678450 containerd[1467]: time="2024-12-13T09:06:38.678417280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 09:06:38.680034 containerd[1467]: time="2024-12-13T09:06:38.679990080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 09:06:38.680142 containerd[1467]: time="2024-12-13T09:06:38.680128960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 09:06:38.680314 containerd[1467]: time="2024-12-13T09:06:38.680296480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:06:38.680461 containerd[1467]: time="2024-12-13T09:06:38.680370760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683056280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683091760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683107880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683118120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683247240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683498000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683681920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683700240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683784680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 09:06:38.683934 containerd[1467]: time="2024-12-13T09:06:38.683827200Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 09:06:38.689945 containerd[1467]: time="2024-12-13T09:06:38.689893080Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 09:06:38.690228 containerd[1467]: time="2024-12-13T09:06:38.690211640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 09:06:38.691952 containerd[1467]: time="2024-12-13T09:06:38.691934320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 09:06:38.692023 containerd[1467]: time="2024-12-13T09:06:38.692009720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 09:06:38.692079 containerd[1467]: time="2024-12-13T09:06:38.692067560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 09:06:38.692380 containerd[1467]: time="2024-12-13T09:06:38.692354480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 09:06:38.694917 containerd[1467]: time="2024-12-13T09:06:38.692762240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 09:06:38.695044 containerd[1467]: time="2024-12-13T09:06:38.695023400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 09:06:38.695116 containerd[1467]: time="2024-12-13T09:06:38.695104080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695198240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695220360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695238320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695266640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695282560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695298760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695312080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695332760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695354560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695377160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695392800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695412960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695428120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695598 containerd[1467]: time="2024-12-13T09:06:38.695444360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695857 containerd[1467]: time="2024-12-13T09:06:38.695462760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695857 containerd[1467]: time="2024-12-13T09:06:38.695482880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695857 containerd[1467]: time="2024-12-13T09:06:38.695497520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695857 containerd[1467]: time="2024-12-13T09:06:38.695510600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695857 containerd[1467]: time="2024-12-13T09:06:38.695525720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.695857 containerd[1467]: time="2024-12-13T09:06:38.695538640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.696193 containerd[1467]: time="2024-12-13T09:06:38.695996600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.696193 containerd[1467]: time="2024-12-13T09:06:38.696025520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.696193 containerd[1467]: time="2024-12-13T09:06:38.696052800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 09:06:38.696193 containerd[1467]: time="2024-12-13T09:06:38.696076920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.696193 containerd[1467]: time="2024-12-13T09:06:38.696089280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.696193 containerd[1467]: time="2024-12-13T09:06:38.696101240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 09:06:38.696386 containerd[1467]: time="2024-12-13T09:06:38.696369080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 09:06:38.696533 containerd[1467]: time="2024-12-13T09:06:38.696435720Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 09:06:38.696533 containerd[1467]: time="2024-12-13T09:06:38.696453040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 09:06:38.696533 containerd[1467]: time="2024-12-13T09:06:38.696466240Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 09:06:38.696533 containerd[1467]: time="2024-12-13T09:06:38.696476640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.696533 containerd[1467]: time="2024-12-13T09:06:38.696491200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 09:06:38.696533 containerd[1467]: time="2024-12-13T09:06:38.696503880Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 09:06:38.696533 containerd[1467]: time="2024-12-13T09:06:38.696514440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 09:06:38.699176 containerd[1467]: time="2024-12-13T09:06:38.699102440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 09:06:38.699625 containerd[1467]: time="2024-12-13T09:06:38.699340880Z" level=info msg="Connect containerd service"
Dec 13 09:06:38.699625 containerd[1467]: time="2024-12-13T09:06:38.699387160Z" level=info msg="using legacy CRI server"
Dec 13 09:06:38.699625 containerd[1467]: time="2024-12-13T09:06:38.699395760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 09:06:38.699625 containerd[1467]: time="2024-12-13T09:06:38.699494160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 09:06:38.700407 containerd[1467]: time="2024-12-13T09:06:38.700376960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 09:06:38.702024 containerd[1467]: time="2024-12-13T09:06:38.701969160Z" level=info msg="Start subscribing containerd event"
Dec 13 09:06:38.702090 containerd[1467]: time="2024-12-13T09:06:38.702041360Z" level=info msg="Start recovering state"
Dec 13 09:06:38.702145 containerd[1467]: time="2024-12-13T09:06:38.702126560Z" level=info msg="Start event monitor"
Dec 13 09:06:38.702145 containerd[1467]: time="2024-12-13T09:06:38.702143240Z" level=info msg="Start snapshots syncer"
Dec 13 09:06:38.702198 containerd[1467]: time="2024-12-13T09:06:38.702154000Z" level=info msg="Start cni network conf syncer for default"
Dec 13 09:06:38.702198 containerd[1467]: time="2024-12-13T09:06:38.702161880Z" level=info msg="Start streaming server"
Dec 13 09:06:38.703054 containerd[1467]: time="2024-12-13T09:06:38.703032040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 09:06:38.704023 containerd[1467]: time="2024-12-13T09:06:38.704001920Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 09:06:38.704237 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 09:06:38.713382 containerd[1467]: time="2024-12-13T09:06:38.713327680Z" level=info msg="containerd successfully booted in 0.098965s"
Dec 13 09:06:38.774080 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 09:06:38.941378 tar[1465]: linux-arm64/LICENSE
Dec 13 09:06:38.941478 tar[1465]: linux-arm64/README.md
Dec 13 09:06:38.953002 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 09:06:39.403128 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 09:06:39.425130 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 09:06:39.430290 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 09:06:39.444086 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 09:06:39.444997 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 09:06:39.452324 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 09:06:39.464991 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 09:06:39.471273 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 09:06:39.484606 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Dec 13 09:06:39.486702 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 09:06:39.895316 systemd-networkd[1375]: eth1: Gained IPv6LL
Dec 13 09:06:39.896266 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Dec 13 09:06:39.899965 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 09:06:39.901898 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 09:06:39.909244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:06:39.913144 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 09:06:39.937084 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 09:06:39.960139 systemd-networkd[1375]: eth0: Gained IPv6LL
Dec 13 09:06:39.960846 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Dec 13 09:06:40.613962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:06:40.615470 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 09:06:40.616422 systemd[1]: Startup finished in 761ms (kernel) + 9.970s (initrd) + 4.998s (userspace) = 15.729s.
Dec 13 09:06:40.629471 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:06:41.151371 kubelet[1572]: E1213 09:06:41.151284 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:06:41.154034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:06:41.154313 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:06:51.405110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 09:06:51.411226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:06:51.553161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:06:51.554059 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:06:51.607477 kubelet[1592]: E1213 09:06:51.607357 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:06:51.611202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:06:51.611544 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:07:01.862499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 09:07:01.872278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:07:01.988244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:07:01.989669 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:07:02.038284 kubelet[1607]: E1213 09:07:02.038220 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:07:02.042390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:07:02.042663 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:07:10.160353 systemd-timesyncd[1386]: Contacted time server 194.50.19.204:123 (2.flatcar.pool.ntp.org).
Dec 13 09:07:10.160774 systemd-timesyncd[1386]: Initial clock synchronization to Fri 2024-12-13 09:07:10.171953 UTC.
Dec 13 09:07:12.128140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 09:07:12.142289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:07:12.268985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:07:12.274998 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:07:12.322560 kubelet[1622]: E1213 09:07:12.322495 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:07:12.325215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:07:12.325483 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:07:22.378423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Dec 13 09:07:22.389338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:07:22.513194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:07:22.525813 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:07:22.568081 kubelet[1637]: E1213 09:07:22.568025 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:07:22.570207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:07:22.570338 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:07:23.932523 update_engine[1452]: I20241213 09:07:23.932380 1452 update_attempter.cc:509] Updating boot flags...
Dec 13 09:07:23.982982 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1653)
Dec 13 09:07:24.035420 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1649)
Dec 13 09:07:32.628513 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Dec 13 09:07:32.639316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:07:32.751267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:07:32.756228 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:07:32.795248 kubelet[1670]: E1213 09:07:32.795105 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:07:32.797868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:07:32.798104 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:07:35.482468 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 09:07:35.483979 systemd[1]: Started sshd@0-188.245.82.140:22-95.85.47.10:41966.service - OpenSSH per-connection server daemon (95.85.47.10:41966).
Dec 13 09:07:35.650980 sshd[1678]: Invalid user test from 95.85.47.10 port 41966
Dec 13 09:07:35.668273 sshd[1678]: Received disconnect from 95.85.47.10 port 41966:11: Bye Bye [preauth]
Dec 13 09:07:35.668273 sshd[1678]: Disconnected from invalid user test 95.85.47.10 port 41966 [preauth]
Dec 13 09:07:35.669790 systemd[1]: sshd@0-188.245.82.140:22-95.85.47.10:41966.service: Deactivated successfully.
Dec 13 09:07:42.878659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Dec 13 09:07:42.887251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:07:43.006181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:07:43.008575 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 09:07:43.050477 kubelet[1690]: E1213 09:07:43.050347 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 09:07:43.052971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 09:07:43.053102 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 09:07:53.128208 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Dec 13 09:07:53.135279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:07:53.273181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:07:53.276169 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:07:53.315421 kubelet[1705]: E1213 09:07:53.315371 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:07:53.318220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:07:53.318401 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:08:03.378187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 09:08:03.395294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:08:03.511995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:08:03.524822 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:08:03.566694 kubelet[1720]: E1213 09:08:03.566647 1720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:08:03.570210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:08:03.570513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:08:13.628301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 09:08:13.636305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 09:08:13.780090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:08:13.791505 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:08:13.837734 kubelet[1735]: E1213 09:08:13.837653 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:08:13.840721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:08:13.841048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:08:22.027307 systemd[1]: Started sshd@1-188.245.82.140:22-139.178.89.65:59306.service - OpenSSH per-connection server daemon (139.178.89.65:59306). Dec 13 09:08:23.013996 sshd[1743]: Accepted publickey for core from 139.178.89.65 port 59306 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:08:23.017231 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:08:23.026929 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 09:08:23.032344 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 09:08:23.036782 systemd-logind[1451]: New session 1 of user core. Dec 13 09:08:23.045714 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 09:08:23.052266 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 09:08:23.057706 (systemd)[1747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 09:08:23.165850 systemd[1747]: Queued start job for default target default.target. 
Dec 13 09:08:23.173793 systemd[1747]: Created slice app.slice - User Application Slice. Dec 13 09:08:23.173862 systemd[1747]: Reached target paths.target - Paths. Dec 13 09:08:23.173893 systemd[1747]: Reached target timers.target - Timers. Dec 13 09:08:23.176402 systemd[1747]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 09:08:23.195098 systemd[1747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 09:08:23.195288 systemd[1747]: Reached target sockets.target - Sockets. Dec 13 09:08:23.195315 systemd[1747]: Reached target basic.target - Basic System. Dec 13 09:08:23.195396 systemd[1747]: Reached target default.target - Main User Target. Dec 13 09:08:23.195447 systemd[1747]: Startup finished in 131ms. Dec 13 09:08:23.195685 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 09:08:23.204243 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 09:08:23.878292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 09:08:23.885372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:08:23.903014 systemd[1]: Started sshd@2-188.245.82.140:22-139.178.89.65:59314.service - OpenSSH per-connection server daemon (139.178.89.65:59314). Dec 13 09:08:24.000033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 09:08:24.005800 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:08:24.044620 kubelet[1768]: E1213 09:08:24.044482 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:08:24.046948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:08:24.047105 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:08:24.889999 sshd[1761]: Accepted publickey for core from 139.178.89.65 port 59314 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:08:24.891947 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:08:24.896607 systemd-logind[1451]: New session 2 of user core. Dec 13 09:08:24.908305 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 09:08:25.566661 sshd[1761]: pam_unix(sshd:session): session closed for user core Dec 13 09:08:25.571529 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Dec 13 09:08:25.572463 systemd[1]: sshd@2-188.245.82.140:22-139.178.89.65:59314.service: Deactivated successfully. Dec 13 09:08:25.575086 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 09:08:25.578106 systemd-logind[1451]: Removed session 2. Dec 13 09:08:25.740453 systemd[1]: Started sshd@3-188.245.82.140:22-139.178.89.65:59324.service - OpenSSH per-connection server daemon (139.178.89.65:59324). 
Dec 13 09:08:26.734147 sshd[1780]: Accepted publickey for core from 139.178.89.65 port 59324 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:08:26.736349 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:08:26.742802 systemd-logind[1451]: New session 3 of user core. Dec 13 09:08:26.754327 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 09:08:27.412222 sshd[1780]: pam_unix(sshd:session): session closed for user core Dec 13 09:08:27.417037 systemd[1]: sshd@3-188.245.82.140:22-139.178.89.65:59324.service: Deactivated successfully. Dec 13 09:08:27.419449 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 09:08:27.421681 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Dec 13 09:08:27.424371 systemd-logind[1451]: Removed session 3. Dec 13 09:08:27.592407 systemd[1]: Started sshd@4-188.245.82.140:22-139.178.89.65:59332.service - OpenSSH per-connection server daemon (139.178.89.65:59332). Dec 13 09:08:28.588716 sshd[1787]: Accepted publickey for core from 139.178.89.65 port 59332 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:08:28.591376 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:08:28.597222 systemd-logind[1451]: New session 4 of user core. Dec 13 09:08:28.607161 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 09:08:29.273532 sshd[1787]: pam_unix(sshd:session): session closed for user core Dec 13 09:08:29.278498 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Dec 13 09:08:29.278873 systemd[1]: sshd@4-188.245.82.140:22-139.178.89.65:59332.service: Deactivated successfully. Dec 13 09:08:29.280680 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 09:08:29.281518 systemd-logind[1451]: Removed session 4. 
Dec 13 09:08:29.446863 systemd[1]: Started sshd@5-188.245.82.140:22-139.178.89.65:39378.service - OpenSSH per-connection server daemon (139.178.89.65:39378). Dec 13 09:08:30.440223 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 39378 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:08:30.442723 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:08:30.447182 systemd-logind[1451]: New session 5 of user core. Dec 13 09:08:30.463232 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 09:08:30.971844 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 09:08:30.972145 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:08:30.985187 sudo[1797]: pam_unix(sudo:session): session closed for user root Dec 13 09:08:31.146207 sshd[1794]: pam_unix(sshd:session): session closed for user core Dec 13 09:08:31.151392 systemd[1]: sshd@5-188.245.82.140:22-139.178.89.65:39378.service: Deactivated successfully. Dec 13 09:08:31.153204 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 09:08:31.154488 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Dec 13 09:08:31.155846 systemd-logind[1451]: Removed session 5. Dec 13 09:08:31.317885 systemd[1]: Started sshd@6-188.245.82.140:22-139.178.89.65:39392.service - OpenSSH per-connection server daemon (139.178.89.65:39392). Dec 13 09:08:32.307272 sshd[1802]: Accepted publickey for core from 139.178.89.65 port 39392 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:08:32.309415 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:08:32.314064 systemd-logind[1451]: New session 6 of user core. Dec 13 09:08:32.321221 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 13 09:08:32.835015 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 09:08:32.835307 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:08:32.839322 sudo[1806]: pam_unix(sudo:session): session closed for user root Dec 13 09:08:32.844736 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 09:08:32.845128 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:08:32.864363 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 09:08:32.866557 auditctl[1809]: No rules Dec 13 09:08:32.867097 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 09:08:32.867953 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 09:08:32.875648 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 09:08:32.901089 augenrules[1827]: No rules Dec 13 09:08:32.902782 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 09:08:32.904139 sudo[1805]: pam_unix(sudo:session): session closed for user root Dec 13 09:08:33.065982 sshd[1802]: pam_unix(sshd:session): session closed for user core Dec 13 09:08:33.070292 systemd[1]: sshd@6-188.245.82.140:22-139.178.89.65:39392.service: Deactivated successfully. Dec 13 09:08:33.072591 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 09:08:33.075267 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Dec 13 09:08:33.076523 systemd-logind[1451]: Removed session 6. Dec 13 09:08:33.239407 systemd[1]: Started sshd@7-188.245.82.140:22-139.178.89.65:39394.service - OpenSSH per-connection server daemon (139.178.89.65:39394). Dec 13 09:08:34.048766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. 
Dec 13 09:08:34.058268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:08:34.178234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:08:34.178493 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:08:34.217483 sshd[1835]: Accepted publickey for core from 139.178.89.65 port 39394 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:08:34.220997 kubelet[1845]: E1213 09:08:34.220850 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:08:34.221504 sshd[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:08:34.224176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:08:34.224332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 09:08:34.229822 systemd-logind[1451]: New session 7 of user core. Dec 13 09:08:34.237237 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 09:08:34.738268 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 09:08:34.738530 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 09:08:35.032950 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Dec 13 09:08:35.033631 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 09:08:35.277012 dockerd[1869]: time="2024-12-13T09:08:35.276610602Z" level=info msg="Starting up" Dec 13 09:08:35.375759 dockerd[1869]: time="2024-12-13T09:08:35.375355886Z" level=info msg="Loading containers: start." Dec 13 09:08:35.487975 kernel: Initializing XFRM netlink socket Dec 13 09:08:35.568094 systemd-networkd[1375]: docker0: Link UP Dec 13 09:08:35.586569 dockerd[1869]: time="2024-12-13T09:08:35.586479854Z" level=info msg="Loading containers: done." Dec 13 09:08:35.601600 dockerd[1869]: time="2024-12-13T09:08:35.601432753Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 09:08:35.601600 dockerd[1869]: time="2024-12-13T09:08:35.601558075Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 09:08:35.601889 dockerd[1869]: time="2024-12-13T09:08:35.601690276Z" level=info msg="Daemon has completed initialization" Dec 13 09:08:35.642340 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 09:08:35.643165 dockerd[1869]: time="2024-12-13T09:08:35.642704276Z" level=info msg="API listen on /run/docker.sock" Dec 13 09:08:36.657587 containerd[1467]: time="2024-12-13T09:08:36.657264711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 09:08:37.340871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824620442.mount: Deactivated successfully. 
Dec 13 09:08:38.692760 containerd[1467]: time="2024-12-13T09:08:38.692544089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:38.695062 containerd[1467]: time="2024-12-13T09:08:38.695012323Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615677" Dec 13 09:08:38.695731 containerd[1467]: time="2024-12-13T09:08:38.695578690Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:38.699054 containerd[1467]: time="2024-12-13T09:08:38.698996377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:38.700664 containerd[1467]: time="2024-12-13T09:08:38.700318315Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 2.043009444s" Dec 13 09:08:38.700664 containerd[1467]: time="2024-12-13T09:08:38.700366995Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Dec 13 09:08:38.701510 containerd[1467]: time="2024-12-13T09:08:38.701375449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 09:08:40.407656 containerd[1467]: time="2024-12-13T09:08:40.407561208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:40.409015 containerd[1467]: time="2024-12-13T09:08:40.408961546Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470116" Dec 13 09:08:40.410476 containerd[1467]: time="2024-12-13T09:08:40.410421125Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:40.413864 containerd[1467]: time="2024-12-13T09:08:40.413820849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:40.415267 containerd[1467]: time="2024-12-13T09:08:40.415128066Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.713713417s" Dec 13 09:08:40.415267 containerd[1467]: time="2024-12-13T09:08:40.415172866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Dec 13 09:08:40.416504 containerd[1467]: time="2024-12-13T09:08:40.416475323Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 09:08:41.919600 containerd[1467]: time="2024-12-13T09:08:41.919537444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:41.920754 containerd[1467]: time="2024-12-13T09:08:41.920711139Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024222" Dec 13 09:08:41.922039 containerd[1467]: time="2024-12-13T09:08:41.921991835Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:41.924992 containerd[1467]: time="2024-12-13T09:08:41.924938512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:41.926481 containerd[1467]: time="2024-12-13T09:08:41.926155847Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.509643963s" Dec 13 09:08:41.926481 containerd[1467]: time="2024-12-13T09:08:41.926195208Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Dec 13 09:08:41.927205 containerd[1467]: time="2024-12-13T09:08:41.927181820Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 09:08:42.906006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942652313.mount: Deactivated successfully. 
Dec 13 09:08:43.215477 containerd[1467]: time="2024-12-13T09:08:43.215342060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:43.217224 containerd[1467]: time="2024-12-13T09:08:43.217186402Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771452" Dec 13 09:08:43.218534 containerd[1467]: time="2024-12-13T09:08:43.218488858Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:43.221280 containerd[1467]: time="2024-12-13T09:08:43.221181730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:43.222020 containerd[1467]: time="2024-12-13T09:08:43.221857258Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.294542076s" Dec 13 09:08:43.222020 containerd[1467]: time="2024-12-13T09:08:43.221894179Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 09:08:43.222765 containerd[1467]: time="2024-12-13T09:08:43.222365224Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 09:08:43.869957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095998913.mount: Deactivated successfully. 
Dec 13 09:08:44.377990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Dec 13 09:08:44.387209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:08:44.535144 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:08:44.545369 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 09:08:44.589311 kubelet[2128]: E1213 09:08:44.588829 2128 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 09:08:44.591869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 09:08:44.592013 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 09:08:44.624294 containerd[1467]: time="2024-12-13T09:08:44.624223466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:44.626263 containerd[1467]: time="2024-12-13T09:08:44.626217129Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Dec 13 09:08:44.627252 containerd[1467]: time="2024-12-13T09:08:44.627140420Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:44.632578 containerd[1467]: time="2024-12-13T09:08:44.632404842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:44.634268 containerd[1467]: time="2024-12-13T09:08:44.634083621Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.411688476s" Dec 13 09:08:44.634268 containerd[1467]: time="2024-12-13T09:08:44.634163502Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 09:08:44.635086 containerd[1467]: time="2024-12-13T09:08:44.634862390Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 09:08:45.159584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006225128.mount: Deactivated successfully. 
Dec 13 09:08:45.166845 containerd[1467]: time="2024-12-13T09:08:45.166792079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:45.168126 containerd[1467]: time="2024-12-13T09:08:45.168082694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Dec 13 09:08:45.172412 containerd[1467]: time="2024-12-13T09:08:45.172318942Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:45.175885 containerd[1467]: time="2024-12-13T09:08:45.175804782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:45.176696 containerd[1467]: time="2024-12-13T09:08:45.176546630Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 541.649639ms" Dec 13 09:08:45.176696 containerd[1467]: time="2024-12-13T09:08:45.176583031Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 09:08:45.177209 containerd[1467]: time="2024-12-13T09:08:45.177030996Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 09:08:45.751815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785137984.mount: Deactivated successfully. 
Dec 13 09:08:47.079980 containerd[1467]: time="2024-12-13T09:08:47.079756946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:47.082120 containerd[1467]: time="2024-12-13T09:08:47.082057772Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Dec 13 09:08:47.084062 containerd[1467]: time="2024-12-13T09:08:47.083993113Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:47.089342 containerd[1467]: time="2024-12-13T09:08:47.089252610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:08:47.091935 containerd[1467]: time="2024-12-13T09:08:47.091481354Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.914417038s" Dec 13 09:08:47.091935 containerd[1467]: time="2024-12-13T09:08:47.091534955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Dec 13 09:08:51.733685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:08:51.746400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:08:51.784186 systemd[1]: Reloading requested from client PID 2219 ('systemctl') (unit session-7.scope)... Dec 13 09:08:51.784403 systemd[1]: Reloading... 
Dec 13 09:08:51.904944 zram_generator::config[2259]: No configuration found. Dec 13 09:08:52.006559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 09:08:52.075401 systemd[1]: Reloading finished in 289 ms. Dec 13 09:08:52.129802 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 09:08:52.129887 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 09:08:52.130210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:08:52.136352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 09:08:52.248132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 09:08:52.255458 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 09:08:52.300406 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 09:08:52.300870 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 09:08:52.300945 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 09:08:52.301297 kubelet[2307]: I1213 09:08:52.301227 2307 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 09:08:53.505952 kubelet[2307]: I1213 09:08:53.505244 2307 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 09:08:53.505952 kubelet[2307]: I1213 09:08:53.505304 2307 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 09:08:53.505952 kubelet[2307]: I1213 09:08:53.505793 2307 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 09:08:53.538327 kubelet[2307]: E1213 09:08:53.538261 2307 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.82.140:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:08:53.539777 kubelet[2307]: I1213 09:08:53.539689 2307 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 09:08:53.550928 kubelet[2307]: E1213 09:08:53.550874 2307 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 09:08:53.551163 kubelet[2307]: I1213 09:08:53.551124 2307 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 09:08:53.556046 kubelet[2307]: I1213 09:08:53.556011 2307 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 09:08:53.558871 kubelet[2307]: I1213 09:08:53.557774 2307 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 09:08:53.558871 kubelet[2307]: I1213 09:08:53.557951 2307 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 09:08:53.558871 kubelet[2307]: I1213 09:08:53.557978 2307 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-a-d14f804a70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 09:08:53.558871 kubelet[2307]: I1213 09:08:53.558355 2307 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 09:08:53.559094 kubelet[2307]: I1213 09:08:53.558365 2307 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 09:08:53.559094 kubelet[2307]: I1213 09:08:53.558553 2307 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:08:53.562415 kubelet[2307]: I1213 09:08:53.562373 2307 kubelet.go:408] "Attempting to sync node with API server" Dec 13 09:08:53.562415 kubelet[2307]: I1213 09:08:53.562423 2307 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 09:08:53.562582 kubelet[2307]: I1213 09:08:53.562453 2307 kubelet.go:314] "Adding apiserver pod source" Dec 13 09:08:53.562582 kubelet[2307]: I1213 09:08:53.562464 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 09:08:53.566853 kubelet[2307]: W1213 09:08:53.566801 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.82.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a-d14f804a70&limit=500&resourceVersion=0": dial tcp 188.245.82.140:6443: connect: connection refused Dec 13 09:08:53.567057 kubelet[2307]: E1213 09:08:53.567037 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.82.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a-d14f804a70&limit=500&resourceVersion=0\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:08:53.567259 kubelet[2307]: I1213 09:08:53.567243 2307 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 09:08:53.569981 kubelet[2307]: I1213 09:08:53.569953 2307 kubelet.go:837] "Not starting 
ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 09:08:53.571054 kubelet[2307]: W1213 09:08:53.571032 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 09:08:53.572792 kubelet[2307]: W1213 09:08:53.572180 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.82.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.82.140:6443: connect: connection refused Dec 13 09:08:53.572792 kubelet[2307]: E1213 09:08:53.572235 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.82.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:08:53.572792 kubelet[2307]: I1213 09:08:53.572517 2307 server.go:1269] "Started kubelet" Dec 13 09:08:53.572928 kubelet[2307]: I1213 09:08:53.572878 2307 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 09:08:53.574429 kubelet[2307]: I1213 09:08:53.574042 2307 server.go:460] "Adding debug handlers to kubelet server" Dec 13 09:08:53.575390 kubelet[2307]: I1213 09:08:53.575330 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 09:08:53.575749 kubelet[2307]: I1213 09:08:53.575730 2307 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 09:08:53.577446 kubelet[2307]: I1213 09:08:53.577414 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 09:08:53.578329 kubelet[2307]: E1213 09:08:53.576961 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://188.245.82.140:6443/api/v1/namespaces/default/events\": dial tcp 188.245.82.140:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-a-d14f804a70.1810b16be95dc87a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-a-d14f804a70,UID:ci-4081-2-1-a-d14f804a70,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-a-d14f804a70,},FirstTimestamp:2024-12-13 09:08:53.572495482 +0000 UTC m=+1.312892079,LastTimestamp:2024-12-13 09:08:53.572495482 +0000 UTC m=+1.312892079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-a-d14f804a70,}" Dec 13 09:08:53.580464 kubelet[2307]: I1213 09:08:53.580252 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 09:08:53.584352 kubelet[2307]: E1213 09:08:53.583461 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found" Dec 13 09:08:53.584352 kubelet[2307]: I1213 09:08:53.583569 2307 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 09:08:53.584352 kubelet[2307]: I1213 09:08:53.583794 2307 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 09:08:53.584352 kubelet[2307]: I1213 09:08:53.583866 2307 reconciler.go:26] "Reconciler: start to sync state" Dec 13 09:08:53.585107 kubelet[2307]: W1213 09:08:53.585062 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.82.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.82.140:6443: connect: connection refused Dec 13 09:08:53.585230 kubelet[2307]: E1213 09:08:53.585211 2307 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.82.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:08:53.585472 kubelet[2307]: I1213 09:08:53.585452 2307 factory.go:221] Registration of the systemd container factory successfully Dec 13 09:08:53.585626 kubelet[2307]: I1213 09:08:53.585607 2307 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 09:08:53.586053 kubelet[2307]: E1213 09:08:53.586026 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.82.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a-d14f804a70?timeout=10s\": dial tcp 188.245.82.140:6443: connect: connection refused" interval="200ms" Dec 13 09:08:53.586231 kubelet[2307]: E1213 09:08:53.586214 2307 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 09:08:53.588314 kubelet[2307]: I1213 09:08:53.588291 2307 factory.go:221] Registration of the containerd container factory successfully Dec 13 09:08:53.598474 kubelet[2307]: I1213 09:08:53.598428 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 09:08:53.599601 kubelet[2307]: I1213 09:08:53.599580 2307 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 09:08:53.599761 kubelet[2307]: I1213 09:08:53.599747 2307 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 09:08:53.599834 kubelet[2307]: I1213 09:08:53.599825 2307 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 09:08:53.599940 kubelet[2307]: E1213 09:08:53.599922 2307 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 09:08:53.612635 kubelet[2307]: W1213 09:08:53.612564 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.82.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.82.140:6443: connect: connection refused Dec 13 09:08:53.612973 kubelet[2307]: E1213 09:08:53.612889 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.82.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError" Dec 13 09:08:53.623491 kubelet[2307]: I1213 09:08:53.623463 2307 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 09:08:53.623953 kubelet[2307]: I1213 09:08:53.623939 2307 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 09:08:53.624033 kubelet[2307]: I1213 09:08:53.624023 2307 state_mem.go:36] "Initialized new in-memory state store" Dec 13 09:08:53.626375 kubelet[2307]: I1213 09:08:53.626348 2307 policy_none.go:49] "None policy: Start" Dec 13 09:08:53.627575 kubelet[2307]: I1213 09:08:53.627239 2307 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 09:08:53.627575 kubelet[2307]: I1213 09:08:53.627265 2307 state_mem.go:35] "Initializing new in-memory state store" Dec 13 09:08:53.635430 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Dec 13 09:08:53.659732 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 09:08:53.676247 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 09:08:53.678657 kubelet[2307]: I1213 09:08:53.678604 2307 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 09:08:53.678893 kubelet[2307]: I1213 09:08:53.678862 2307 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 09:08:53.679373 kubelet[2307]: I1213 09:08:53.678886 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 09:08:53.680456 kubelet[2307]: I1213 09:08:53.679603 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 09:08:53.681568 kubelet[2307]: E1213 09:08:53.681541 2307 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-a-d14f804a70\" not found" Dec 13 09:08:53.714087 systemd[1]: Created slice kubepods-burstable-pod0a8e3d57a925655e64ee09e5b053337c.slice - libcontainer container kubepods-burstable-pod0a8e3d57a925655e64ee09e5b053337c.slice. Dec 13 09:08:53.738236 systemd[1]: Created slice kubepods-burstable-podd6ce8ce2403e81865eb433fc3f43b573.slice - libcontainer container kubepods-burstable-podd6ce8ce2403e81865eb433fc3f43b573.slice. Dec 13 09:08:53.747239 systemd[1]: Created slice kubepods-burstable-podc18ca3a1c00de3048c271c9cdf887966.slice - libcontainer container kubepods-burstable-podc18ca3a1c00de3048c271c9cdf887966.slice. 
Dec 13 09:08:53.782036 kubelet[2307]: I1213 09:08:53.781689 2307 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.784514 kubelet[2307]: E1213 09:08:53.784457 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.82.140:6443/api/v1/nodes\": dial tcp 188.245.82.140:6443: connect: connection refused" node="ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.787134 kubelet[2307]: E1213 09:08:53.787067 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.82.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a-d14f804a70?timeout=10s\": dial tcp 188.245.82.140:6443: connect: connection refused" interval="400ms" Dec 13 09:08:53.884534 kubelet[2307]: I1213 09:08:53.884445 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.884534 kubelet[2307]: I1213 09:08:53.884499 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.884534 kubelet[2307]: I1213 09:08:53.884523 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c18ca3a1c00de3048c271c9cdf887966-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-a-d14f804a70\" (UID: 
\"c18ca3a1c00de3048c271c9cdf887966\") " pod="kube-system/kube-scheduler-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.884534 kubelet[2307]: I1213 09:08:53.884542 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a8e3d57a925655e64ee09e5b053337c-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-a-d14f804a70\" (UID: \"0a8e3d57a925655e64ee09e5b053337c\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.884806 kubelet[2307]: I1213 09:08:53.884558 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a8e3d57a925655e64ee09e5b053337c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-a-d14f804a70\" (UID: \"0a8e3d57a925655e64ee09e5b053337c\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.884806 kubelet[2307]: I1213 09:08:53.884577 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.884806 kubelet[2307]: I1213 09:08:53.884592 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.884806 kubelet[2307]: I1213 09:08:53.884609 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/0a8e3d57a925655e64ee09e5b053337c-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-a-d14f804a70\" (UID: \"0a8e3d57a925655e64ee09e5b053337c\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.884806 kubelet[2307]: I1213 09:08:53.884624 2307 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.988403 kubelet[2307]: I1213 09:08:53.987336 2307 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a-d14f804a70" Dec 13 09:08:53.988403 kubelet[2307]: E1213 09:08:53.987739 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.82.140:6443/api/v1/nodes\": dial tcp 188.245.82.140:6443: connect: connection refused" node="ci-4081-2-1-a-d14f804a70" Dec 13 09:08:54.034508 containerd[1467]: time="2024-12-13T09:08:54.034359611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-a-d14f804a70,Uid:0a8e3d57a925655e64ee09e5b053337c,Namespace:kube-system,Attempt:0,}" Dec 13 09:08:54.041646 containerd[1467]: time="2024-12-13T09:08:54.041594878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-a-d14f804a70,Uid:d6ce8ce2403e81865eb433fc3f43b573,Namespace:kube-system,Attempt:0,}" Dec 13 09:08:54.051218 containerd[1467]: time="2024-12-13T09:08:54.050809964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-a-d14f804a70,Uid:c18ca3a1c00de3048c271c9cdf887966,Namespace:kube-system,Attempt:0,}" Dec 13 09:08:54.188691 kubelet[2307]: E1213 09:08:54.188613 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://188.245.82.140:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-a-d14f804a70?timeout=10s\": dial tcp 188.245.82.140:6443: connect: connection refused" interval="800ms" Dec 13 09:08:54.392838 kubelet[2307]: I1213 09:08:54.392501 2307 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a-d14f804a70" Dec 13 09:08:54.393034 kubelet[2307]: E1213 09:08:54.392982 2307 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.82.140:6443/api/v1/nodes\": dial tcp 188.245.82.140:6443: connect: connection refused" node="ci-4081-2-1-a-d14f804a70" Dec 13 09:08:54.507787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167322958.mount: Deactivated successfully. Dec 13 09:08:54.516589 containerd[1467]: time="2024-12-13T09:08:54.516345797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:08:54.517917 containerd[1467]: time="2024-12-13T09:08:54.517876691Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:08:54.519410 containerd[1467]: time="2024-12-13T09:08:54.519360385Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Dec 13 09:08:54.520063 containerd[1467]: time="2024-12-13T09:08:54.520024071Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:08:54.521143 containerd[1467]: time="2024-12-13T09:08:54.521056200Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:08:54.522214 
containerd[1467]: time="2024-12-13T09:08:54.522126570Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 09:08:54.525018 containerd[1467]: time="2024-12-13T09:08:54.524980797Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 490.496865ms" Dec 13 09:08:54.526479 containerd[1467]: time="2024-12-13T09:08:54.526441130Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:08:54.528043 containerd[1467]: time="2024-12-13T09:08:54.528007505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 477.106221ms" Dec 13 09:08:54.530134 containerd[1467]: time="2024-12-13T09:08:54.529937523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 09:08:54.531688 containerd[1467]: time="2024-12-13T09:08:54.531657619Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 489.981019ms" Dec 13 
09:08:54.646961 containerd[1467]: time="2024-12-13T09:08:54.646545803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:08:54.646961 containerd[1467]: time="2024-12-13T09:08:54.646585123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:08:54.646961 containerd[1467]: time="2024-12-13T09:08:54.646635284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:08:54.646961 containerd[1467]: time="2024-12-13T09:08:54.646756525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:08:54.647657 containerd[1467]: time="2024-12-13T09:08:54.646412962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:08:54.647657 containerd[1467]: time="2024-12-13T09:08:54.646466082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:08:54.647657 containerd[1467]: time="2024-12-13T09:08:54.646476842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:08:54.647657 containerd[1467]: time="2024-12-13T09:08:54.646553763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:08:54.648791 containerd[1467]: time="2024-12-13T09:08:54.648670423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:08:54.648921 containerd[1467]: time="2024-12-13T09:08:54.648828104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:08:54.649218 containerd[1467]: time="2024-12-13T09:08:54.648858064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:08:54.649386 containerd[1467]: time="2024-12-13T09:08:54.649293388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:08:54.674286 systemd[1]: Started cri-containerd-14f8728fe0dbf3a67e93c72d86814e4b2062dcf4ec6cdf382e4cbc65e60aa702.scope - libcontainer container 14f8728fe0dbf3a67e93c72d86814e4b2062dcf4ec6cdf382e4cbc65e60aa702. Dec 13 09:08:54.676568 systemd[1]: Started cri-containerd-21b12d3ba01ccf51ee63bddebf1395a6c6f261052e15a49241c7f7480ee80929.scope - libcontainer container 21b12d3ba01ccf51ee63bddebf1395a6c6f261052e15a49241c7f7480ee80929. Dec 13 09:08:54.684401 systemd[1]: Started cri-containerd-f66ea0adc3e9b058f9a17bb8300fbb525e59f064392339905b596be5afe98333.scope - libcontainer container f66ea0adc3e9b058f9a17bb8300fbb525e59f064392339905b596be5afe98333. 
Dec 13 09:08:54.726989 containerd[1467]: time="2024-12-13T09:08:54.726952788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-a-d14f804a70,Uid:d6ce8ce2403e81865eb433fc3f43b573,Namespace:kube-system,Attempt:0,} returns sandbox id \"14f8728fe0dbf3a67e93c72d86814e4b2062dcf4ec6cdf382e4cbc65e60aa702\""
Dec 13 09:08:54.731342 containerd[1467]: time="2024-12-13T09:08:54.731117706Z" level=info msg="CreateContainer within sandbox \"14f8728fe0dbf3a67e93c72d86814e4b2062dcf4ec6cdf382e4cbc65e60aa702\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 09:08:54.741600 kubelet[2307]: W1213 09:08:54.741481 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.82.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.82.140:6443: connect: connection refused
Dec 13 09:08:54.741600 kubelet[2307]: E1213 09:08:54.741568 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.82.140:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:08:54.742323 containerd[1467]: time="2024-12-13T09:08:54.742278090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-a-d14f804a70,Uid:0a8e3d57a925655e64ee09e5b053337c,Namespace:kube-system,Attempt:0,} returns sandbox id \"21b12d3ba01ccf51ee63bddebf1395a6c6f261052e15a49241c7f7480ee80929\""
Dec 13 09:08:54.747322 containerd[1467]: time="2024-12-13T09:08:54.747284256Z" level=info msg="CreateContainer within sandbox \"21b12d3ba01ccf51ee63bddebf1395a6c6f261052e15a49241c7f7480ee80929\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 09:08:54.748850 containerd[1467]: time="2024-12-13T09:08:54.748801950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-a-d14f804a70,Uid:c18ca3a1c00de3048c271c9cdf887966,Namespace:kube-system,Attempt:0,} returns sandbox id \"f66ea0adc3e9b058f9a17bb8300fbb525e59f064392339905b596be5afe98333\""
Dec 13 09:08:54.752210 containerd[1467]: time="2024-12-13T09:08:54.752175062Z" level=info msg="CreateContainer within sandbox \"14f8728fe0dbf3a67e93c72d86814e4b2062dcf4ec6cdf382e4cbc65e60aa702\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61\""
Dec 13 09:08:54.760521 containerd[1467]: time="2024-12-13T09:08:54.760445338Z" level=info msg="CreateContainer within sandbox \"21b12d3ba01ccf51ee63bddebf1395a6c6f261052e15a49241c7f7480ee80929\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"93ae2ca957c565313a2ddf25f203b918beafa5c61f65dec1e0c52ae61eb85faf\""
Dec 13 09:08:54.763748 containerd[1467]: time="2024-12-13T09:08:54.763674568Z" level=info msg="StartContainer for \"93ae2ca957c565313a2ddf25f203b918beafa5c61f65dec1e0c52ae61eb85faf\""
Dec 13 09:08:54.764409 containerd[1467]: time="2024-12-13T09:08:54.763829289Z" level=info msg="StartContainer for \"2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61\""
Dec 13 09:08:54.769394 kubelet[2307]: W1213 09:08:54.769245 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.82.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.82.140:6443: connect: connection refused
Dec 13 09:08:54.769394 kubelet[2307]: E1213 09:08:54.769315 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.82.140:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:08:54.770788 containerd[1467]: time="2024-12-13T09:08:54.770650433Z" level=info msg="CreateContainer within sandbox \"f66ea0adc3e9b058f9a17bb8300fbb525e59f064392339905b596be5afe98333\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 09:08:54.789832 containerd[1467]: time="2024-12-13T09:08:54.789759330Z" level=info msg="CreateContainer within sandbox \"f66ea0adc3e9b058f9a17bb8300fbb525e59f064392339905b596be5afe98333\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ade806ca1f8a89f3ee7a10e265fa1aaf20371c32de63db1df7d8aba7bf4b3f02\""
Dec 13 09:08:54.790644 containerd[1467]: time="2024-12-13T09:08:54.790606778Z" level=info msg="StartContainer for \"ade806ca1f8a89f3ee7a10e265fa1aaf20371c32de63db1df7d8aba7bf4b3f02\""
Dec 13 09:08:54.800147 systemd[1]: Started cri-containerd-2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61.scope - libcontainer container 2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61.
Dec 13 09:08:54.801661 systemd[1]: Started cri-containerd-93ae2ca957c565313a2ddf25f203b918beafa5c61f65dec1e0c52ae61eb85faf.scope - libcontainer container 93ae2ca957c565313a2ddf25f203b918beafa5c61f65dec1e0c52ae61eb85faf.
Dec 13 09:08:54.828094 systemd[1]: Started cri-containerd-ade806ca1f8a89f3ee7a10e265fa1aaf20371c32de63db1df7d8aba7bf4b3f02.scope - libcontainer container ade806ca1f8a89f3ee7a10e265fa1aaf20371c32de63db1df7d8aba7bf4b3f02.
Dec 13 09:08:54.854046 containerd[1467]: time="2024-12-13T09:08:54.852770513Z" level=info msg="StartContainer for \"93ae2ca957c565313a2ddf25f203b918beafa5c61f65dec1e0c52ae61eb85faf\" returns successfully"
Dec 13 09:08:54.880498 containerd[1467]: time="2024-12-13T09:08:54.880278448Z" level=info msg="StartContainer for \"2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61\" returns successfully"
Dec 13 09:08:54.892241 kubelet[2307]: W1213 09:08:54.891874 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.82.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.82.140:6443: connect: connection refused
Dec 13 09:08:54.892241 kubelet[2307]: E1213 09:08:54.891959 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.82.140:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:08:54.894343 containerd[1467]: time="2024-12-13T09:08:54.894280858Z" level=info msg="StartContainer for \"ade806ca1f8a89f3ee7a10e265fa1aaf20371c32de63db1df7d8aba7bf4b3f02\" returns successfully"
Dec 13 09:08:54.896109 kubelet[2307]: W1213 09:08:54.896052 2307 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.82.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a-d14f804a70&limit=500&resourceVersion=0": dial tcp 188.245.82.140:6443: connect: connection refused
Dec 13 09:08:54.896222 kubelet[2307]: E1213 09:08:54.896118 2307 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.82.140:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-a-d14f804a70&limit=500&resourceVersion=0\": dial tcp 188.245.82.140:6443: connect: connection refused" logger="UnhandledError"
Dec 13 09:08:55.197014 kubelet[2307]: I1213 09:08:55.195287 2307 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:56.841139 kubelet[2307]: E1213 09:08:56.841081 2307 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-a-d14f804a70\" not found" node="ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:57.001034 kubelet[2307]: I1213 09:08:57.000808 2307 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:57.001034 kubelet[2307]: E1213 09:08:57.000857 2307 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-2-1-a-d14f804a70\": node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.036794 kubelet[2307]: E1213 09:08:57.036758 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.137364 kubelet[2307]: E1213 09:08:57.137316 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.237899 kubelet[2307]: E1213 09:08:57.237820 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.338556 kubelet[2307]: E1213 09:08:57.338486 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.439301 kubelet[2307]: E1213 09:08:57.439104 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.540317 kubelet[2307]: E1213 09:08:57.540255 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.640819 kubelet[2307]: E1213 09:08:57.640726 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.741724 kubelet[2307]: E1213 09:08:57.741545 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:57.842813 kubelet[2307]: E1213 09:08:57.842552 2307 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:58.575285 kubelet[2307]: I1213 09:08:58.575219 2307 apiserver.go:52] "Watching apiserver"
Dec 13 09:08:58.584717 kubelet[2307]: I1213 09:08:58.584651 2307 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 09:08:59.211534 systemd[1]: Reloading requested from client PID 2583 ('systemctl') (unit session-7.scope)...
Dec 13 09:08:59.211562 systemd[1]: Reloading...
Dec 13 09:08:59.307950 zram_generator::config[2623]: No configuration found.
Dec 13 09:08:59.416116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 09:08:59.498194 systemd[1]: Reloading finished in 285 ms.
Dec 13 09:08:59.541711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:08:59.553366 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 09:08:59.553848 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:08:59.553948 systemd[1]: kubelet.service: Consumed 1.685s CPU time, 115.0M memory peak, 0B memory swap peak.
Dec 13 09:08:59.564496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 09:08:59.674512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 09:08:59.689295 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 09:08:59.739011 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:08:59.739011 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 09:08:59.739011 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 09:08:59.739011 kubelet[2668]: I1213 09:08:59.738879 2668 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 09:08:59.746428 kubelet[2668]: I1213 09:08:59.746379 2668 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 09:08:59.746428 kubelet[2668]: I1213 09:08:59.746415 2668 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 09:08:59.746665 kubelet[2668]: I1213 09:08:59.746649 2668 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 09:08:59.748667 kubelet[2668]: I1213 09:08:59.748564 2668 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 09:08:59.750984 kubelet[2668]: I1213 09:08:59.750941 2668 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 09:08:59.754289 kubelet[2668]: E1213 09:08:59.754236 2668 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 09:08:59.754289 kubelet[2668]: I1213 09:08:59.754285 2668 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 09:08:59.760096 kubelet[2668]: I1213 09:08:59.760009 2668 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 09:08:59.760254 kubelet[2668]: I1213 09:08:59.760202 2668 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 09:08:59.760337 kubelet[2668]: I1213 09:08:59.760299 2668 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 09:08:59.760611 kubelet[2668]: I1213 09:08:59.760325 2668 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-a-d14f804a70","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 09:08:59.760611 kubelet[2668]: I1213 09:08:59.760594 2668 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 09:08:59.760611 kubelet[2668]: I1213 09:08:59.760604 2668 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 09:08:59.760975 kubelet[2668]: I1213 09:08:59.760643 2668 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:08:59.760975 kubelet[2668]: I1213 09:08:59.760793 2668 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 09:08:59.760975 kubelet[2668]: I1213 09:08:59.760811 2668 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 09:08:59.760975 kubelet[2668]: I1213 09:08:59.760835 2668 kubelet.go:314] "Adding apiserver pod source"
Dec 13 09:08:59.760975 kubelet[2668]: I1213 09:08:59.760846 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 09:08:59.770282 kubelet[2668]: I1213 09:08:59.767298 2668 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 09:08:59.770282 kubelet[2668]: I1213 09:08:59.767827 2668 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 09:08:59.770282 kubelet[2668]: I1213 09:08:59.768234 2668 server.go:1269] "Started kubelet"
Dec 13 09:08:59.770282 kubelet[2668]: I1213 09:08:59.768433 2668 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 09:08:59.770282 kubelet[2668]: I1213 09:08:59.768531 2668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 09:08:59.770282 kubelet[2668]: I1213 09:08:59.768811 2668 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 09:08:59.770282 kubelet[2668]: I1213 09:08:59.769384 2668 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 09:08:59.774066 kubelet[2668]: I1213 09:08:59.772472 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 09:08:59.786742 kubelet[2668]: I1213 09:08:59.786675 2668 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 09:08:59.788362 kubelet[2668]: I1213 09:08:59.788282 2668 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 09:08:59.788612 kubelet[2668]: E1213 09:08:59.788510 2668 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-a-d14f804a70\" not found"
Dec 13 09:08:59.790635 kubelet[2668]: I1213 09:08:59.790609 2668 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 09:08:59.790867 kubelet[2668]: I1213 09:08:59.790778 2668 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 09:08:59.794066 kubelet[2668]: I1213 09:08:59.794042 2668 factory.go:221] Registration of the systemd container factory successfully
Dec 13 09:08:59.794919 kubelet[2668]: I1213 09:08:59.794152 2668 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 09:08:59.803013 kubelet[2668]: I1213 09:08:59.802977 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 09:08:59.804689 kubelet[2668]: I1213 09:08:59.804663 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 09:08:59.804858 kubelet[2668]: I1213 09:08:59.804847 2668 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 09:08:59.804940 kubelet[2668]: I1213 09:08:59.804930 2668 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 09:08:59.805060 kubelet[2668]: E1213 09:08:59.805043 2668 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 09:08:59.805188 kubelet[2668]: I1213 09:08:59.805162 2668 factory.go:221] Registration of the containerd container factory successfully
Dec 13 09:08:59.854658 kubelet[2668]: I1213 09:08:59.854622 2668 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 09:08:59.854658 kubelet[2668]: I1213 09:08:59.854644 2668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 09:08:59.854658 kubelet[2668]: I1213 09:08:59.854667 2668 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 09:08:59.854989 kubelet[2668]: I1213 09:08:59.854878 2668 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 09:08:59.854989 kubelet[2668]: I1213 09:08:59.854891 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 09:08:59.854989 kubelet[2668]: I1213 09:08:59.854936 2668 policy_none.go:49] "None policy: Start"
Dec 13 09:08:59.855665 kubelet[2668]: I1213 09:08:59.855642 2668 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 09:08:59.856000 kubelet[2668]: I1213 09:08:59.855982 2668 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 09:08:59.856519 kubelet[2668]: I1213 09:08:59.856399 2668 state_mem.go:75] "Updated machine memory state"
Dec 13 09:08:59.861448 kubelet[2668]: I1213 09:08:59.861420 2668 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 09:08:59.862268 kubelet[2668]: I1213 09:08:59.861881 2668 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 09:08:59.862268 kubelet[2668]: I1213 09:08:59.861902 2668 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 09:08:59.862268 kubelet[2668]: I1213 09:08:59.862170 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 09:08:59.966366 kubelet[2668]: I1213 09:08:59.965926 2668 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.975688 kubelet[2668]: I1213 09:08:59.975286 2668 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.975688 kubelet[2668]: I1213 09:08:59.975387 2668 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.991848 kubelet[2668]: I1213 09:08:59.991798 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.992516 kubelet[2668]: I1213 09:08:59.992151 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.992516 kubelet[2668]: I1213 09:08:59.992213 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.992516 kubelet[2668]: I1213 09:08:59.992254 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c18ca3a1c00de3048c271c9cdf887966-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-a-d14f804a70\" (UID: \"c18ca3a1c00de3048c271c9cdf887966\") " pod="kube-system/kube-scheduler-ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.992516 kubelet[2668]: I1213 09:08:59.992304 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a8e3d57a925655e64ee09e5b053337c-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-a-d14f804a70\" (UID: \"0a8e3d57a925655e64ee09e5b053337c\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.992516 kubelet[2668]: I1213 09:08:59.992343 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a8e3d57a925655e64ee09e5b053337c-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-a-d14f804a70\" (UID: \"0a8e3d57a925655e64ee09e5b053337c\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.992867 kubelet[2668]: I1213 09:08:59.992378 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.992867 kubelet[2668]: I1213 09:08:59.992423 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a8e3d57a925655e64ee09e5b053337c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-a-d14f804a70\" (UID: \"0a8e3d57a925655e64ee09e5b053337c\") " pod="kube-system/kube-apiserver-ci-4081-2-1-a-d14f804a70"
Dec 13 09:08:59.992867 kubelet[2668]: I1213 09:08:59.992458 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d6ce8ce2403e81865eb433fc3f43b573-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-a-d14f804a70\" (UID: \"d6ce8ce2403e81865eb433fc3f43b573\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70"
Dec 13 09:09:00.207482 sudo[2700]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 09:09:00.208214 sudo[2700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Dec 13 09:09:00.660294 sudo[2700]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:00.763458 kubelet[2668]: I1213 09:09:00.763413 2668 apiserver.go:52] "Watching apiserver"
Dec 13 09:09:00.791443 kubelet[2668]: I1213 09:09:00.791393 2668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Dec 13 09:09:00.881372 kubelet[2668]: I1213 09:09:00.881289 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-a-d14f804a70" podStartSLOduration=1.881265865 podStartE2EDuration="1.881265865s" podCreationTimestamp="2024-12-13 09:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:09:00.867931677 +0000 UTC m=+1.174128254" watchObservedRunningTime="2024-12-13 09:09:00.881265865 +0000 UTC m=+1.187462442"
Dec 13 09:09:00.895922 kubelet[2668]: I1213 09:09:00.893860 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-a-d14f804a70" podStartSLOduration=1.893845448 podStartE2EDuration="1.893845448s" podCreationTimestamp="2024-12-13 09:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:09:00.882373874 +0000 UTC m=+1.188570451" watchObservedRunningTime="2024-12-13 09:09:00.893845448 +0000 UTC m=+1.200042025"
Dec 13 09:09:00.910059 kubelet[2668]: I1213 09:09:00.909900 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-a-d14f804a70" podStartSLOduration=1.909883738 podStartE2EDuration="1.909883738s" podCreationTimestamp="2024-12-13 09:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:09:00.894318812 +0000 UTC m=+1.200515349" watchObservedRunningTime="2024-12-13 09:09:00.909883738 +0000 UTC m=+1.216080315"
Dec 13 09:09:03.053346 sudo[1853]: pam_unix(sudo:session): session closed for user root
Dec 13 09:09:03.213206 sshd[1835]: pam_unix(sshd:session): session closed for user core
Dec 13 09:09:03.219305 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit.
Dec 13 09:09:03.220812 systemd[1]: sshd@7-188.245.82.140:22-139.178.89.65:39394.service: Deactivated successfully.
Dec 13 09:09:03.224873 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 09:09:03.226065 systemd[1]: session-7.scope: Consumed 7.168s CPU time, 151.1M memory peak, 0B memory swap peak.
Dec 13 09:09:03.228866 systemd-logind[1451]: Removed session 7.
Dec 13 09:09:05.615410 kubelet[2668]: I1213 09:09:05.615336 2668 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 09:09:05.615994 containerd[1467]: time="2024-12-13T09:09:05.615778429Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 09:09:05.616292 kubelet[2668]: I1213 09:09:05.616172 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 09:09:05.694199 systemd[1]: Created slice kubepods-besteffort-pod672f3225_6dee_48b3_bac2_122128b97bd2.slice - libcontainer container kubepods-besteffort-pod672f3225_6dee_48b3_bac2_122128b97bd2.slice.
Dec 13 09:09:05.717154 systemd[1]: Created slice kubepods-burstable-pod883ff6a8_3f4a_4057_8ccc_9e0d3073e33d.slice - libcontainer container kubepods-burstable-pod883ff6a8_3f4a_4057_8ccc_9e0d3073e33d.slice.
Dec 13 09:09:05.727328 kubelet[2668]: I1213 09:09:05.727211 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-clustermesh-secrets\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.727485 kubelet[2668]: I1213 09:09:05.727363 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-hubble-tls\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.727485 kubelet[2668]: I1213 09:09:05.727452 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-hostproc\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.727553 kubelet[2668]: I1213 09:09:05.727526 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-lib-modules\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.727629 kubelet[2668]: I1213 09:09:05.727605 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-host-proc-sys-kernel\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.727792 kubelet[2668]: I1213 09:09:05.727719 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/672f3225-6dee-48b3-bac2-122128b97bd2-kube-proxy\") pod \"kube-proxy-fzsfx\" (UID: \"672f3225-6dee-48b3-bac2-122128b97bd2\") " pod="kube-system/kube-proxy-fzsfx"
Dec 13 09:09:05.727849 kubelet[2668]: I1213 09:09:05.727824 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j54jt\" (UniqueName: \"kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-kube-api-access-j54jt\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.727883 kubelet[2668]: I1213 09:09:05.727866 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-etc-cni-netd\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.728720 kubelet[2668]: I1213 09:09:05.727971 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/672f3225-6dee-48b3-bac2-122128b97bd2-lib-modules\") pod \"kube-proxy-fzsfx\" (UID: \"672f3225-6dee-48b3-bac2-122128b97bd2\") " pod="kube-system/kube-proxy-fzsfx"
Dec 13 09:09:05.728720 kubelet[2668]: I1213 09:09:05.728047 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cni-path\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.728720 kubelet[2668]: I1213 09:09:05.728082 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/672f3225-6dee-48b3-bac2-122128b97bd2-xtables-lock\") pod \"kube-proxy-fzsfx\" (UID: \"672f3225-6dee-48b3-bac2-122128b97bd2\") " pod="kube-system/kube-proxy-fzsfx"
Dec 13 09:09:05.728720 kubelet[2668]: I1213 09:09:05.728145 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-xtables-lock\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.728720 kubelet[2668]: I1213 09:09:05.728219 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-config-path\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.728720 kubelet[2668]: I1213 09:09:05.728253 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-bpf-maps\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.729284 kubelet[2668]: I1213 09:09:05.728337 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlqzc\" (UniqueName: \"kubernetes.io/projected/672f3225-6dee-48b3-bac2-122128b97bd2-kube-api-access-xlqzc\") pod \"kube-proxy-fzsfx\" (UID: \"672f3225-6dee-48b3-bac2-122128b97bd2\") " pod="kube-system/kube-proxy-fzsfx"
Dec 13 09:09:05.729284 kubelet[2668]: I1213 09:09:05.728409 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-run\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.729284 kubelet[2668]: I1213 09:09:05.728490 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-cgroup\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.729284 kubelet[2668]: I1213 09:09:05.728590 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-host-proc-sys-net\") pod \"cilium-299jd\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " pod="kube-system/cilium-299jd"
Dec 13 09:09:05.849345 kubelet[2668]: E1213 09:09:05.849276 2668 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 09:09:05.849345 kubelet[2668]: E1213 09:09:05.849318 2668 projected.go:194] Error preparing data for projected volume kube-api-access-j54jt for pod kube-system/cilium-299jd: configmap "kube-root-ca.crt" not found
Dec 13 09:09:05.849480 kubelet[2668]: E1213 09:09:05.849379 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-kube-api-access-j54jt podName:883ff6a8-3f4a-4057-8ccc-9e0d3073e33d nodeName:}" failed. No retries permitted until 2024-12-13 09:09:06.349357787 +0000 UTC m=+6.655554324 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j54jt" (UniqueName: "kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-kube-api-access-j54jt") pod "cilium-299jd" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d") : configmap "kube-root-ca.crt" not found
Dec 13 09:09:05.857842 kubelet[2668]: E1213 09:09:05.857741 2668 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Dec 13 09:09:05.857842 kubelet[2668]: E1213 09:09:05.857778 2668 projected.go:194] Error preparing data for projected volume kube-api-access-xlqzc for pod kube-system/kube-proxy-fzsfx: configmap "kube-root-ca.crt" not found
Dec 13 09:09:05.857842 kubelet[2668]: E1213 09:09:05.857827 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/672f3225-6dee-48b3-bac2-122128b97bd2-kube-api-access-xlqzc podName:672f3225-6dee-48b3-bac2-122128b97bd2 nodeName:}" failed. No retries permitted until 2024-12-13 09:09:06.357808689 +0000 UTC m=+6.664005266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xlqzc" (UniqueName: "kubernetes.io/projected/672f3225-6dee-48b3-bac2-122128b97bd2-kube-api-access-xlqzc") pod "kube-proxy-fzsfx" (UID: "672f3225-6dee-48b3-bac2-122128b97bd2") : configmap "kube-root-ca.crt" not found
Dec 13 09:09:06.605199 containerd[1467]: time="2024-12-13T09:09:06.605029300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fzsfx,Uid:672f3225-6dee-48b3-bac2-122128b97bd2,Namespace:kube-system,Attempt:0,}"
Dec 13 09:09:06.625270 containerd[1467]: time="2024-12-13T09:09:06.623597034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-299jd,Uid:883ff6a8-3f4a-4057-8ccc-9e0d3073e33d,Namespace:kube-system,Attempt:0,}"
Dec 13 09:09:06.639948 containerd[1467]: time="2024-12-13T09:09:06.639772670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:09:06.639948 containerd[1467]: time="2024-12-13T09:09:06.639931072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 09:09:06.640315 containerd[1467]: time="2024-12-13T09:09:06.639960912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:09:06.640315 containerd[1467]: time="2024-12-13T09:09:06.640098313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 09:09:06.658325 containerd[1467]: time="2024-12-13T09:09:06.658072963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 09:09:06.658325 containerd[1467]: time="2024-12-13T09:09:06.658134963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:09:06.658325 containerd[1467]: time="2024-12-13T09:09:06.658158003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:09:06.660432 containerd[1467]: time="2024-12-13T09:09:06.658241684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:09:06.664797 systemd[1]: Started cri-containerd-41344343c961a2f282289bf3b1c63092076e5a12e0c12e9c37bee766d5f2b4a9.scope - libcontainer container 41344343c961a2f282289bf3b1c63092076e5a12e0c12e9c37bee766d5f2b4a9. Dec 13 09:09:06.687131 systemd[1]: Started cri-containerd-5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852.scope - libcontainer container 5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852. Dec 13 09:09:06.696523 systemd[1]: Created slice kubepods-besteffort-pod32876adc_22ad_418a_a108_76d3857db0dc.slice - libcontainer container kubepods-besteffort-pod32876adc_22ad_418a_a108_76d3857db0dc.slice. 
Dec 13 09:09:06.737356 kubelet[2668]: I1213 09:09:06.737241 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649xr\" (UniqueName: \"kubernetes.io/projected/32876adc-22ad-418a-a108-76d3857db0dc-kube-api-access-649xr\") pod \"cilium-operator-5d85765b45-jgc65\" (UID: \"32876adc-22ad-418a-a108-76d3857db0dc\") " pod="kube-system/cilium-operator-5d85765b45-jgc65" Dec 13 09:09:06.737356 kubelet[2668]: I1213 09:09:06.737305 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32876adc-22ad-418a-a108-76d3857db0dc-cilium-config-path\") pod \"cilium-operator-5d85765b45-jgc65\" (UID: \"32876adc-22ad-418a-a108-76d3857db0dc\") " pod="kube-system/cilium-operator-5d85765b45-jgc65" Dec 13 09:09:06.762896 containerd[1467]: time="2024-12-13T09:09:06.762323835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fzsfx,Uid:672f3225-6dee-48b3-bac2-122128b97bd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"41344343c961a2f282289bf3b1c63092076e5a12e0c12e9c37bee766d5f2b4a9\"" Dec 13 09:09:06.770140 containerd[1467]: time="2024-12-13T09:09:06.770090051Z" level=info msg="CreateContainer within sandbox \"41344343c961a2f282289bf3b1c63092076e5a12e0c12e9c37bee766d5f2b4a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 09:09:06.770365 containerd[1467]: time="2024-12-13T09:09:06.770276092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-299jd,Uid:883ff6a8-3f4a-4057-8ccc-9e0d3073e33d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\"" Dec 13 09:09:06.774437 containerd[1467]: time="2024-12-13T09:09:06.774019319Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 09:09:06.808759 containerd[1467]: 
time="2024-12-13T09:09:06.808659649Z" level=info msg="CreateContainer within sandbox \"41344343c961a2f282289bf3b1c63092076e5a12e0c12e9c37bee766d5f2b4a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"94083df3d3a5b8413c5440e45c8906d696b7efd58eb9e804bc0b74722d2dff48\"" Dec 13 09:09:06.810961 containerd[1467]: time="2024-12-13T09:09:06.809553695Z" level=info msg="StartContainer for \"94083df3d3a5b8413c5440e45c8906d696b7efd58eb9e804bc0b74722d2dff48\"" Dec 13 09:09:06.866145 systemd[1]: Started cri-containerd-94083df3d3a5b8413c5440e45c8906d696b7efd58eb9e804bc0b74722d2dff48.scope - libcontainer container 94083df3d3a5b8413c5440e45c8906d696b7efd58eb9e804bc0b74722d2dff48. Dec 13 09:09:06.912359 containerd[1467]: time="2024-12-13T09:09:06.912306956Z" level=info msg="StartContainer for \"94083df3d3a5b8413c5440e45c8906d696b7efd58eb9e804bc0b74722d2dff48\" returns successfully" Dec 13 09:09:07.004024 containerd[1467]: time="2024-12-13T09:09:07.003110971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jgc65,Uid:32876adc-22ad-418a-a108-76d3857db0dc,Namespace:kube-system,Attempt:0,}" Dec 13 09:09:07.040875 containerd[1467]: time="2024-12-13T09:09:07.040684517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:09:07.040875 containerd[1467]: time="2024-12-13T09:09:07.040777638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:09:07.040875 containerd[1467]: time="2024-12-13T09:09:07.040790638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:09:07.041350 containerd[1467]: time="2024-12-13T09:09:07.040981519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:09:07.072723 systemd[1]: Started cri-containerd-667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09.scope - libcontainer container 667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09. Dec 13 09:09:07.116192 containerd[1467]: time="2024-12-13T09:09:07.116127731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jgc65,Uid:32876adc-22ad-418a-a108-76d3857db0dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\"" Dec 13 09:09:07.893998 kubelet[2668]: I1213 09:09:07.893897 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fzsfx" podStartSLOduration=2.893878954 podStartE2EDuration="2.893878954s" podCreationTimestamp="2024-12-13 09:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:09:07.89339771 +0000 UTC m=+8.199594287" watchObservedRunningTime="2024-12-13 09:09:07.893878954 +0000 UTC m=+8.200075531" Dec 13 09:09:13.529797 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531363200.mount: Deactivated successfully. 
Dec 13 09:09:15.129164 containerd[1467]: time="2024-12-13T09:09:15.129073660Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:09:15.131834 containerd[1467]: time="2024-12-13T09:09:15.131793277Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650926" Dec 13 09:09:15.132804 containerd[1467]: time="2024-12-13T09:09:15.132745803Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:09:15.136269 containerd[1467]: time="2024-12-13T09:09:15.135667181Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.361602822s" Dec 13 09:09:15.136269 containerd[1467]: time="2024-12-13T09:09:15.135728501Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 09:09:15.138174 containerd[1467]: time="2024-12-13T09:09:15.137726593Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 09:09:15.142441 containerd[1467]: time="2024-12-13T09:09:15.142005699Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 09:09:15.158198 containerd[1467]: time="2024-12-13T09:09:15.158133718Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\"" Dec 13 09:09:15.159124 containerd[1467]: time="2024-12-13T09:09:15.159087444Z" level=info msg="StartContainer for \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\"" Dec 13 09:09:15.189136 systemd[1]: Started cri-containerd-d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca.scope - libcontainer container d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca. Dec 13 09:09:15.220828 containerd[1467]: time="2024-12-13T09:09:15.220776781Z" level=info msg="StartContainer for \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\" returns successfully" Dec 13 09:09:15.236597 systemd[1]: cri-containerd-d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca.scope: Deactivated successfully. 
Dec 13 09:09:15.384507 containerd[1467]: time="2024-12-13T09:09:15.384071580Z" level=info msg="shim disconnected" id=d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca namespace=k8s.io Dec 13 09:09:15.384507 containerd[1467]: time="2024-12-13T09:09:15.384223220Z" level=warning msg="cleaning up after shim disconnected" id=d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca namespace=k8s.io Dec 13 09:09:15.384507 containerd[1467]: time="2024-12-13T09:09:15.384235941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:09:15.907636 containerd[1467]: time="2024-12-13T09:09:15.907578341Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 09:09:15.920088 containerd[1467]: time="2024-12-13T09:09:15.920037537Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\"" Dec 13 09:09:15.920658 containerd[1467]: time="2024-12-13T09:09:15.920617220Z" level=info msg="StartContainer for \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\"" Dec 13 09:09:15.954270 systemd[1]: Started cri-containerd-62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8.scope - libcontainer container 62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8. Dec 13 09:09:15.986458 containerd[1467]: time="2024-12-13T09:09:15.986400983Z" level=info msg="StartContainer for \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\" returns successfully" Dec 13 09:09:15.999037 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 09:09:15.999273 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 09:09:15.999347 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:09:16.006303 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 09:09:16.006485 systemd[1]: cri-containerd-62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8.scope: Deactivated successfully. Dec 13 09:09:16.033985 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 09:09:16.036870 containerd[1467]: time="2024-12-13T09:09:16.036814047Z" level=info msg="shim disconnected" id=62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8 namespace=k8s.io Dec 13 09:09:16.037446 containerd[1467]: time="2024-12-13T09:09:16.037226370Z" level=warning msg="cleaning up after shim disconnected" id=62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8 namespace=k8s.io Dec 13 09:09:16.037446 containerd[1467]: time="2024-12-13T09:09:16.037248290Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:09:16.050395 containerd[1467]: time="2024-12-13T09:09:16.050346008Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:09:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:09:16.151600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca-rootfs.mount: Deactivated successfully. 
Dec 13 09:09:16.913256 containerd[1467]: time="2024-12-13T09:09:16.913217355Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 09:09:16.941008 containerd[1467]: time="2024-12-13T09:09:16.940951361Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\"" Dec 13 09:09:16.941844 containerd[1467]: time="2024-12-13T09:09:16.941755526Z" level=info msg="StartContainer for \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\"" Dec 13 09:09:16.975173 systemd[1]: Started cri-containerd-f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843.scope - libcontainer container f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843. Dec 13 09:09:17.013891 containerd[1467]: time="2024-12-13T09:09:17.013758558Z" level=info msg="StartContainer for \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\" returns successfully" Dec 13 09:09:17.022092 systemd[1]: cri-containerd-f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843.scope: Deactivated successfully. 
Dec 13 09:09:17.051764 containerd[1467]: time="2024-12-13T09:09:17.051658862Z" level=info msg="shim disconnected" id=f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843 namespace=k8s.io Dec 13 09:09:17.052231 containerd[1467]: time="2024-12-13T09:09:17.051857663Z" level=warning msg="cleaning up after shim disconnected" id=f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843 namespace=k8s.io Dec 13 09:09:17.052231 containerd[1467]: time="2024-12-13T09:09:17.051870423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:09:17.151156 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843-rootfs.mount: Deactivated successfully. Dec 13 09:09:17.604840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562553120.mount: Deactivated successfully. Dec 13 09:09:17.920164 containerd[1467]: time="2024-12-13T09:09:17.919610511Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 09:09:17.936684 containerd[1467]: time="2024-12-13T09:09:17.936549972Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\"" Dec 13 09:09:17.937413 containerd[1467]: time="2024-12-13T09:09:17.937390177Z" level=info msg="StartContainer for \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\"" Dec 13 09:09:17.968258 systemd[1]: Started cri-containerd-ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280.scope - libcontainer container ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280. 
Dec 13 09:09:17.990518 systemd[1]: cri-containerd-ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280.scope: Deactivated successfully. Dec 13 09:09:17.993262 containerd[1467]: time="2024-12-13T09:09:17.992146580Z" level=info msg="StartContainer for \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\" returns successfully" Dec 13 09:09:18.020407 containerd[1467]: time="2024-12-13T09:09:18.020308905Z" level=info msg="shim disconnected" id=ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280 namespace=k8s.io Dec 13 09:09:18.020678 containerd[1467]: time="2024-12-13T09:09:18.020393745Z" level=warning msg="cleaning up after shim disconnected" id=ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280 namespace=k8s.io Dec 13 09:09:18.020678 containerd[1467]: time="2024-12-13T09:09:18.020445385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:09:18.924003 containerd[1467]: time="2024-12-13T09:09:18.923960997Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 09:09:18.942408 containerd[1467]: time="2024-12-13T09:09:18.942298064Z" level=info msg="CreateContainer within sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\"" Dec 13 09:09:18.946209 containerd[1467]: time="2024-12-13T09:09:18.945139400Z" level=info msg="StartContainer for \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\"" Dec 13 09:09:18.981123 systemd[1]: Started cri-containerd-692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01.scope - libcontainer container 692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01. 
Dec 13 09:09:19.013960 containerd[1467]: time="2024-12-13T09:09:19.013821438Z" level=info msg="StartContainer for \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\" returns successfully" Dec 13 09:09:19.167195 kubelet[2668]: I1213 09:09:19.167150 2668 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 09:09:19.218634 systemd[1]: Created slice kubepods-burstable-podd097575c_2861_4314_b20e_86cc64b0f14f.slice - libcontainer container kubepods-burstable-podd097575c_2861_4314_b20e_86cc64b0f14f.slice. Dec 13 09:09:19.225487 kubelet[2668]: I1213 09:09:19.224008 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d097575c-2861-4314-b20e-86cc64b0f14f-config-volume\") pod \"coredns-6f6b679f8f-xdj5x\" (UID: \"d097575c-2861-4314-b20e-86cc64b0f14f\") " pod="kube-system/coredns-6f6b679f8f-xdj5x" Dec 13 09:09:19.225487 kubelet[2668]: I1213 09:09:19.224059 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxvb5\" (UniqueName: \"kubernetes.io/projected/d097575c-2861-4314-b20e-86cc64b0f14f-kube-api-access-vxvb5\") pod \"coredns-6f6b679f8f-xdj5x\" (UID: \"d097575c-2861-4314-b20e-86cc64b0f14f\") " pod="kube-system/coredns-6f6b679f8f-xdj5x" Dec 13 09:09:19.225487 kubelet[2668]: I1213 09:09:19.224083 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/181b7cfa-b1d0-4c79-b9c2-88d783cddccc-config-volume\") pod \"coredns-6f6b679f8f-bcnkn\" (UID: \"181b7cfa-b1d0-4c79-b9c2-88d783cddccc\") " pod="kube-system/coredns-6f6b679f8f-bcnkn" Dec 13 09:09:19.225487 kubelet[2668]: I1213 09:09:19.224216 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9n9b\" (UniqueName: 
\"kubernetes.io/projected/181b7cfa-b1d0-4c79-b9c2-88d783cddccc-kube-api-access-j9n9b\") pod \"coredns-6f6b679f8f-bcnkn\" (UID: \"181b7cfa-b1d0-4c79-b9c2-88d783cddccc\") " pod="kube-system/coredns-6f6b679f8f-bcnkn" Dec 13 09:09:19.231801 systemd[1]: Created slice kubepods-burstable-pod181b7cfa_b1d0_4c79_b9c2_88d783cddccc.slice - libcontainer container kubepods-burstable-pod181b7cfa_b1d0_4c79_b9c2_88d783cddccc.slice. Dec 13 09:09:19.530427 containerd[1467]: time="2024-12-13T09:09:19.530123391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xdj5x,Uid:d097575c-2861-4314-b20e-86cc64b0f14f,Namespace:kube-system,Attempt:0,}" Dec 13 09:09:19.536716 containerd[1467]: time="2024-12-13T09:09:19.536286666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bcnkn,Uid:181b7cfa-b1d0-4c79-b9c2-88d783cddccc,Namespace:kube-system,Attempt:0,}" Dec 13 09:09:19.964839 kubelet[2668]: I1213 09:09:19.964698 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-299jd" podStartSLOduration=6.600540477 podStartE2EDuration="14.964679155s" podCreationTimestamp="2024-12-13 09:09:05 +0000 UTC" firstStartedPulling="2024-12-13 09:09:06.773160113 +0000 UTC m=+7.079356690" lastFinishedPulling="2024-12-13 09:09:15.137298791 +0000 UTC m=+15.443495368" observedRunningTime="2024-12-13 09:09:19.961433177 +0000 UTC m=+20.267629754" watchObservedRunningTime="2024-12-13 09:09:19.964679155 +0000 UTC m=+20.270875732" Dec 13 09:09:19.997732 containerd[1467]: time="2024-12-13T09:09:19.996946660Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:09:19.999150 containerd[1467]: time="2024-12-13T09:09:19.999088312Z" level=info msg="stop pulling image 
quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138322" Dec 13 09:09:20.000211 containerd[1467]: time="2024-12-13T09:09:20.000161198Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 09:09:20.002750 containerd[1467]: time="2024-12-13T09:09:20.002292450Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.864509016s" Dec 13 09:09:20.002750 containerd[1467]: time="2024-12-13T09:09:20.002356011Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 09:09:20.008030 containerd[1467]: time="2024-12-13T09:09:20.007967362Z" level=info msg="CreateContainer within sandbox \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 09:09:20.023504 containerd[1467]: time="2024-12-13T09:09:20.023424289Z" level=info msg="CreateContainer within sandbox \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\"" Dec 13 09:09:20.024398 containerd[1467]: time="2024-12-13T09:09:20.024342535Z" level=info msg="StartContainer for \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\"" Dec 
13 09:09:20.053152 systemd[1]: Started cri-containerd-f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec.scope - libcontainer container f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec. Dec 13 09:09:20.086428 containerd[1467]: time="2024-12-13T09:09:20.086219523Z" level=info msg="StartContainer for \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\" returns successfully" Dec 13 09:09:20.956039 kubelet[2668]: I1213 09:09:20.955950 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-jgc65" podStartSLOduration=2.072445638 podStartE2EDuration="14.955893256s" podCreationTimestamp="2024-12-13 09:09:06 +0000 UTC" firstStartedPulling="2024-12-13 09:09:07.120360561 +0000 UTC m=+7.426557138" lastFinishedPulling="2024-12-13 09:09:20.003808139 +0000 UTC m=+20.310004756" observedRunningTime="2024-12-13 09:09:20.954682609 +0000 UTC m=+21.260879186" watchObservedRunningTime="2024-12-13 09:09:20.955893256 +0000 UTC m=+21.262089833" Dec 13 09:09:24.084541 systemd-networkd[1375]: cilium_host: Link UP Dec 13 09:09:24.086296 systemd-networkd[1375]: cilium_net: Link UP Dec 13 09:09:24.086300 systemd-networkd[1375]: cilium_net: Gained carrier Dec 13 09:09:24.086521 systemd-networkd[1375]: cilium_host: Gained carrier Dec 13 09:09:24.209425 systemd-networkd[1375]: cilium_vxlan: Link UP Dec 13 09:09:24.209431 systemd-networkd[1375]: cilium_vxlan: Gained carrier Dec 13 09:09:24.399076 systemd-networkd[1375]: cilium_host: Gained IPv6LL Dec 13 09:09:24.487125 systemd-networkd[1375]: cilium_net: Gained IPv6LL Dec 13 09:09:24.493965 kernel: NET: Registered PF_ALG protocol family Dec 13 09:09:25.187177 systemd-networkd[1375]: lxc_health: Link UP Dec 13 09:09:25.205410 systemd-networkd[1375]: lxc_health: Gained carrier Dec 13 09:09:25.336307 systemd-networkd[1375]: cilium_vxlan: Gained IPv6LL Dec 13 09:09:25.623420 systemd-networkd[1375]: lxc39acc81686af: Link UP Dec 13 09:09:25.628982 
kernel: eth0: renamed from tmp2eb20 Dec 13 09:09:25.641121 systemd-networkd[1375]: lxc39acc81686af: Gained carrier Dec 13 09:09:25.641369 systemd-networkd[1375]: lxc49f1d3fb8980: Link UP Dec 13 09:09:25.651335 kernel: eth0: renamed from tmpb1cd3 Dec 13 09:09:25.658258 systemd-networkd[1375]: lxc49f1d3fb8980: Gained carrier Dec 13 09:09:26.487704 systemd-networkd[1375]: lxc_health: Gained IPv6LL Dec 13 09:09:27.575153 systemd-networkd[1375]: lxc39acc81686af: Gained IPv6LL Dec 13 09:09:27.577994 systemd-networkd[1375]: lxc49f1d3fb8980: Gained IPv6LL Dec 13 09:09:29.568089 containerd[1467]: time="2024-12-13T09:09:29.567646785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:09:29.568413 containerd[1467]: time="2024-12-13T09:09:29.568105067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:09:29.568413 containerd[1467]: time="2024-12-13T09:09:29.568135707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:09:29.568413 containerd[1467]: time="2024-12-13T09:09:29.568297668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:09:29.591268 containerd[1467]: time="2024-12-13T09:09:29.590884859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:09:29.591268 containerd[1467]: time="2024-12-13T09:09:29.590973260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:09:29.591268 containerd[1467]: time="2024-12-13T09:09:29.590990420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:09:29.591268 containerd[1467]: time="2024-12-13T09:09:29.591076020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:09:29.622077 systemd[1]: Started cri-containerd-2eb20404c20c90b2b33a860ecedc1c5e5ab3263f3311ea172e5c395d479ba1e3.scope - libcontainer container 2eb20404c20c90b2b33a860ecedc1c5e5ab3263f3311ea172e5c395d479ba1e3. Dec 13 09:09:29.623238 systemd[1]: Started cri-containerd-b1cd3e686e3497b51bdc2d5ed5435681cd2d683b4a59bb5ccbcdd3bed40d281c.scope - libcontainer container b1cd3e686e3497b51bdc2d5ed5435681cd2d683b4a59bb5ccbcdd3bed40d281c. Dec 13 09:09:29.678207 containerd[1467]: time="2024-12-13T09:09:29.678151289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xdj5x,Uid:d097575c-2861-4314-b20e-86cc64b0f14f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2eb20404c20c90b2b33a860ecedc1c5e5ab3263f3311ea172e5c395d479ba1e3\"" Dec 13 09:09:29.681987 containerd[1467]: time="2024-12-13T09:09:29.681832667Z" level=info msg="CreateContainer within sandbox \"2eb20404c20c90b2b33a860ecedc1c5e5ab3263f3311ea172e5c395d479ba1e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:09:29.708144 containerd[1467]: time="2024-12-13T09:09:29.707938675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-bcnkn,Uid:181b7cfa-b1d0-4c79-b9c2-88d783cddccc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1cd3e686e3497b51bdc2d5ed5435681cd2d683b4a59bb5ccbcdd3bed40d281c\"" Dec 13 09:09:29.709261 containerd[1467]: time="2024-12-13T09:09:29.709216922Z" level=info msg="CreateContainer within sandbox \"2eb20404c20c90b2b33a860ecedc1c5e5ab3263f3311ea172e5c395d479ba1e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e9601cb0733fded434aa3fb5604fc3314d2a2b6d1fcf6b15bf5bf65079c1ec2\"" Dec 13 09:09:29.711490 containerd[1467]: 
time="2024-12-13T09:09:29.711459013Z" level=info msg="StartContainer for \"9e9601cb0733fded434aa3fb5604fc3314d2a2b6d1fcf6b15bf5bf65079c1ec2\"" Dec 13 09:09:29.715419 containerd[1467]: time="2024-12-13T09:09:29.715222711Z" level=info msg="CreateContainer within sandbox \"b1cd3e686e3497b51bdc2d5ed5435681cd2d683b4a59bb5ccbcdd3bed40d281c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 09:09:29.733560 containerd[1467]: time="2024-12-13T09:09:29.733502281Z" level=info msg="CreateContainer within sandbox \"b1cd3e686e3497b51bdc2d5ed5435681cd2d683b4a59bb5ccbcdd3bed40d281c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c478981ec0695392adaafdcbcb8ecb4243c4f31dfe01b67ab07fbcc7e275926e\"" Dec 13 09:09:29.734238 containerd[1467]: time="2024-12-13T09:09:29.734209965Z" level=info msg="StartContainer for \"c478981ec0695392adaafdcbcb8ecb4243c4f31dfe01b67ab07fbcc7e275926e\"" Dec 13 09:09:29.751233 systemd[1]: Started cri-containerd-9e9601cb0733fded434aa3fb5604fc3314d2a2b6d1fcf6b15bf5bf65079c1ec2.scope - libcontainer container 9e9601cb0733fded434aa3fb5604fc3314d2a2b6d1fcf6b15bf5bf65079c1ec2. Dec 13 09:09:29.784132 systemd[1]: Started cri-containerd-c478981ec0695392adaafdcbcb8ecb4243c4f31dfe01b67ab07fbcc7e275926e.scope - libcontainer container c478981ec0695392adaafdcbcb8ecb4243c4f31dfe01b67ab07fbcc7e275926e. 
Dec 13 09:09:29.810324 containerd[1467]: time="2024-12-13T09:09:29.810225459Z" level=info msg="StartContainer for \"9e9601cb0733fded434aa3fb5604fc3314d2a2b6d1fcf6b15bf5bf65079c1ec2\" returns successfully" Dec 13 09:09:29.835188 containerd[1467]: time="2024-12-13T09:09:29.834839100Z" level=info msg="StartContainer for \"c478981ec0695392adaafdcbcb8ecb4243c4f31dfe01b67ab07fbcc7e275926e\" returns successfully" Dec 13 09:09:29.980810 kubelet[2668]: I1213 09:09:29.980728 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bcnkn" podStartSLOduration=23.980710218 podStartE2EDuration="23.980710218s" podCreationTimestamp="2024-12-13 09:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:09:29.977778683 +0000 UTC m=+30.283975260" watchObservedRunningTime="2024-12-13 09:09:29.980710218 +0000 UTC m=+30.286906795" Dec 13 09:09:30.031555 kubelet[2668]: I1213 09:09:30.031452 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xdj5x" podStartSLOduration=24.031413065 podStartE2EDuration="24.031413065s" podCreationTimestamp="2024-12-13 09:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:09:30.00361121 +0000 UTC m=+30.309807787" watchObservedRunningTime="2024-12-13 09:09:30.031413065 +0000 UTC m=+30.337609762" Dec 13 09:10:14.621575 systemd[1]: Started sshd@8-188.245.82.140:22-185.255.91.127:52920.service - OpenSSH per-connection server daemon (185.255.91.127:52920). 
Dec 13 09:10:15.189964 sshd[4043]: Invalid user ftptest from 185.255.91.127 port 52920 Dec 13 09:10:15.294038 sshd[4043]: Received disconnect from 185.255.91.127 port 52920:11: Bye Bye [preauth] Dec 13 09:10:15.294038 sshd[4043]: Disconnected from invalid user ftptest 185.255.91.127 port 52920 [preauth] Dec 13 09:10:15.297612 systemd[1]: sshd@8-188.245.82.140:22-185.255.91.127:52920.service: Deactivated successfully. Dec 13 09:10:20.000428 systemd[1]: Started sshd@9-188.245.82.140:22-95.85.47.10:59064.service - OpenSSH per-connection server daemon (95.85.47.10:59064). Dec 13 09:10:20.188201 sshd[4048]: Invalid user ftptest from 95.85.47.10 port 59064 Dec 13 09:10:20.210962 sshd[4048]: Received disconnect from 95.85.47.10 port 59064:11: Bye Bye [preauth] Dec 13 09:10:20.211168 sshd[4048]: Disconnected from invalid user ftptest 95.85.47.10 port 59064 [preauth] Dec 13 09:10:20.214668 systemd[1]: sshd@9-188.245.82.140:22-95.85.47.10:59064.service: Deactivated successfully. Dec 13 09:11:13.415374 systemd[1]: Started sshd@10-188.245.82.140:22-139.59.63.157:6100.service - OpenSSH per-connection server daemon (139.59.63.157:6100). Dec 13 09:11:16.844342 sshd[4060]: kex_protocol_error: type 20 seq 2 [preauth] Dec 13 09:11:16.844342 sshd[4060]: kex_protocol_error: type 30 seq 3 [preauth] Dec 13 09:11:17.394655 sshd[4060]: kex_protocol_error: type 20 seq 4 [preauth] Dec 13 09:11:17.394655 sshd[4060]: kex_protocol_error: type 30 seq 5 [preauth] Dec 13 09:11:19.413729 sshd[4060]: kex_protocol_error: type 20 seq 6 [preauth] Dec 13 09:11:19.413729 sshd[4060]: kex_protocol_error: type 30 seq 7 [preauth] Dec 13 09:11:45.394824 sshd[4060]: Connection reset by 139.59.63.157 port 6100 [preauth] Dec 13 09:11:45.396512 systemd[1]: sshd@10-188.245.82.140:22-139.59.63.157:6100.service: Deactivated successfully. Dec 13 09:11:55.554447 systemd[1]: Started sshd@11-188.245.82.140:22-106.12.181.81:45288.service - OpenSSH per-connection server daemon (106.12.181.81:45288). 
Dec 13 09:11:57.956415 systemd[1]: Started sshd@12-188.245.82.140:22-95.85.47.10:37500.service - OpenSSH per-connection server daemon (95.85.47.10:37500). Dec 13 09:11:58.113580 sshd[4073]: Invalid user ftp-user from 95.85.47.10 port 37500 Dec 13 09:11:58.127695 sshd[4073]: Received disconnect from 95.85.47.10 port 37500:11: Bye Bye [preauth] Dec 13 09:11:58.127695 sshd[4073]: Disconnected from invalid user ftp-user 95.85.47.10 port 37500 [preauth] Dec 13 09:11:58.130508 systemd[1]: sshd@12-188.245.82.140:22-95.85.47.10:37500.service: Deactivated successfully. Dec 13 09:12:20.115021 systemd[1]: Started sshd@13-188.245.82.140:22-185.255.91.127:57976.service - OpenSSH per-connection server daemon (185.255.91.127:57976). Dec 13 09:12:20.684270 sshd[4083]: Invalid user test from 185.255.91.127 port 57976 Dec 13 09:12:20.787715 sshd[4083]: Received disconnect from 185.255.91.127 port 57976:11: Bye Bye [preauth] Dec 13 09:12:20.787715 sshd[4083]: Disconnected from invalid user test 185.255.91.127 port 57976 [preauth] Dec 13 09:12:20.789845 systemd[1]: sshd@13-188.245.82.140:22-185.255.91.127:57976.service: Deactivated successfully. Dec 13 09:13:26.554119 systemd[1]: Started sshd@14-188.245.82.140:22-95.85.47.10:44160.service - OpenSSH per-connection server daemon (95.85.47.10:44160). Dec 13 09:13:26.742043 sshd[4096]: Invalid user bitrix from 95.85.47.10 port 44160 Dec 13 09:13:26.763642 sshd[4096]: Received disconnect from 95.85.47.10 port 44160:11: Bye Bye [preauth] Dec 13 09:13:26.763642 sshd[4096]: Disconnected from invalid user bitrix 95.85.47.10 port 44160 [preauth] Dec 13 09:13:26.767322 systemd[1]: sshd@14-188.245.82.140:22-95.85.47.10:44160.service: Deactivated successfully. Dec 13 09:13:49.079360 systemd[1]: Started sshd@15-188.245.82.140:22-139.178.89.65:54126.service - OpenSSH per-connection server daemon (139.178.89.65:54126). 
Dec 13 09:13:50.073274 sshd[4103]: Accepted publickey for core from 139.178.89.65 port 54126 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:13:50.075204 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:50.081077 systemd-logind[1451]: New session 8 of user core. Dec 13 09:13:50.087202 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 09:13:50.858500 sshd[4103]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:50.863233 systemd[1]: sshd@15-188.245.82.140:22-139.178.89.65:54126.service: Deactivated successfully. Dec 13 09:13:50.866564 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 09:13:50.869577 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Dec 13 09:13:50.871391 systemd-logind[1451]: Removed session 8. Dec 13 09:13:55.570042 systemd[1]: sshd@11-188.245.82.140:22-106.12.181.81:45288.service: Deactivated successfully. Dec 13 09:13:56.036001 systemd[1]: Started sshd@16-188.245.82.140:22-139.178.89.65:54132.service - OpenSSH per-connection server daemon (139.178.89.65:54132). Dec 13 09:13:57.031556 sshd[4119]: Accepted publickey for core from 139.178.89.65 port 54132 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:13:57.034054 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:13:57.039188 systemd-logind[1451]: New session 9 of user core. Dec 13 09:13:57.043126 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 09:13:57.795353 sshd[4119]: pam_unix(sshd:session): session closed for user core Dec 13 09:13:57.800026 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Dec 13 09:13:57.800861 systemd[1]: sshd@16-188.245.82.140:22-139.178.89.65:54132.service: Deactivated successfully. Dec 13 09:13:57.804022 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 13 09:13:57.805183 systemd-logind[1451]: Removed session 9. Dec 13 09:14:02.968319 systemd[1]: Started sshd@17-188.245.82.140:22-139.178.89.65:40022.service - OpenSSH per-connection server daemon (139.178.89.65:40022). Dec 13 09:14:03.944335 sshd[4136]: Accepted publickey for core from 139.178.89.65 port 40022 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:03.946157 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:03.952064 systemd-logind[1451]: New session 10 of user core. Dec 13 09:14:03.957177 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 09:14:04.694247 sshd[4136]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:04.700099 systemd[1]: sshd@17-188.245.82.140:22-139.178.89.65:40022.service: Deactivated successfully. Dec 13 09:14:04.703742 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 09:14:04.705692 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Dec 13 09:14:04.707987 systemd-logind[1451]: Removed session 10. Dec 13 09:14:04.878605 systemd[1]: Started sshd@18-188.245.82.140:22-139.178.89.65:40024.service - OpenSSH per-connection server daemon (139.178.89.65:40024). Dec 13 09:14:05.859981 sshd[4150]: Accepted publickey for core from 139.178.89.65 port 40024 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:05.861684 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:05.867087 systemd-logind[1451]: New session 11 of user core. Dec 13 09:14:05.873262 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 09:14:06.693112 sshd[4150]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:06.699812 systemd[1]: sshd@18-188.245.82.140:22-139.178.89.65:40024.service: Deactivated successfully. Dec 13 09:14:06.701846 systemd[1]: session-11.scope: Deactivated successfully. 
Dec 13 09:14:06.704999 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Dec 13 09:14:06.706592 systemd-logind[1451]: Removed session 11. Dec 13 09:14:06.866314 systemd[1]: Started sshd@19-188.245.82.140:22-139.178.89.65:40034.service - OpenSSH per-connection server daemon (139.178.89.65:40034). Dec 13 09:14:07.857733 sshd[4163]: Accepted publickey for core from 139.178.89.65 port 40034 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:07.858839 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:07.864513 systemd-logind[1451]: New session 12 of user core. Dec 13 09:14:07.872314 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 09:14:08.623956 sshd[4163]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:08.628550 systemd[1]: sshd@19-188.245.82.140:22-139.178.89.65:40034.service: Deactivated successfully. Dec 13 09:14:08.630952 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 09:14:08.635347 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Dec 13 09:14:08.637367 systemd-logind[1451]: Removed session 12. Dec 13 09:14:09.767307 systemd[1]: Started sshd@20-188.245.82.140:22-185.255.91.127:33104.service - OpenSSH per-connection server daemon (185.255.91.127:33104). Dec 13 09:14:10.371157 sshd[4178]: Invalid user helena from 185.255.91.127 port 33104 Dec 13 09:14:10.473503 sshd[4178]: Received disconnect from 185.255.91.127 port 33104:11: Bye Bye [preauth] Dec 13 09:14:10.473503 sshd[4178]: Disconnected from invalid user helena 185.255.91.127 port 33104 [preauth] Dec 13 09:14:10.476108 systemd[1]: sshd@20-188.245.82.140:22-185.255.91.127:33104.service: Deactivated successfully. Dec 13 09:14:13.798646 systemd[1]: Started sshd@21-188.245.82.140:22-139.178.89.65:59446.service - OpenSSH per-connection server daemon (139.178.89.65:59446). 
Dec 13 09:14:14.782405 sshd[4183]: Accepted publickey for core from 139.178.89.65 port 59446 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:14.784606 sshd[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:14.789555 systemd-logind[1451]: New session 13 of user core. Dec 13 09:14:14.802270 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 09:14:15.539179 sshd[4183]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:15.544935 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit. Dec 13 09:14:15.545922 systemd[1]: sshd@21-188.245.82.140:22-139.178.89.65:59446.service: Deactivated successfully. Dec 13 09:14:15.548029 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 09:14:15.550809 systemd-logind[1451]: Removed session 13. Dec 13 09:14:15.714304 systemd[1]: Started sshd@22-188.245.82.140:22-139.178.89.65:59456.service - OpenSSH per-connection server daemon (139.178.89.65:59456). Dec 13 09:14:16.704763 sshd[4196]: Accepted publickey for core from 139.178.89.65 port 59456 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:16.707152 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:16.717010 systemd-logind[1451]: New session 14 of user core. Dec 13 09:14:16.723710 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 09:14:17.511856 sshd[4196]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:17.516473 systemd[1]: sshd@22-188.245.82.140:22-139.178.89.65:59456.service: Deactivated successfully. Dec 13 09:14:17.518898 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 09:14:17.521182 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Dec 13 09:14:17.523134 systemd-logind[1451]: Removed session 14. 
Dec 13 09:14:17.691303 systemd[1]: Started sshd@23-188.245.82.140:22-139.178.89.65:59472.service - OpenSSH per-connection server daemon (139.178.89.65:59472). Dec 13 09:14:18.665153 sshd[4208]: Accepted publickey for core from 139.178.89.65 port 59472 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:18.667715 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:18.672965 systemd-logind[1451]: New session 15 of user core. Dec 13 09:14:18.683306 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 09:14:21.027046 sshd[4208]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:21.032572 systemd[1]: sshd@23-188.245.82.140:22-139.178.89.65:59472.service: Deactivated successfully. Dec 13 09:14:21.035569 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 09:14:21.036975 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Dec 13 09:14:21.038011 systemd-logind[1451]: Removed session 15. Dec 13 09:14:21.202998 systemd[1]: Started sshd@24-188.245.82.140:22-139.178.89.65:37348.service - OpenSSH per-connection server daemon (139.178.89.65:37348). Dec 13 09:14:22.172319 sshd[4226]: Accepted publickey for core from 139.178.89.65 port 37348 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:22.174858 sshd[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:22.181671 systemd-logind[1451]: New session 16 of user core. Dec 13 09:14:22.187163 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 09:14:23.037118 sshd[4226]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:23.042442 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Dec 13 09:14:23.042797 systemd[1]: sshd@24-188.245.82.140:22-139.178.89.65:37348.service: Deactivated successfully. 
Dec 13 09:14:23.046879 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 09:14:23.056554 systemd-logind[1451]: Removed session 16. Dec 13 09:14:23.226655 systemd[1]: Started sshd@25-188.245.82.140:22-139.178.89.65:37354.service - OpenSSH per-connection server daemon (139.178.89.65:37354). Dec 13 09:14:24.211118 sshd[4236]: Accepted publickey for core from 139.178.89.65 port 37354 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:24.214407 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:24.222853 systemd-logind[1451]: New session 17 of user core. Dec 13 09:14:24.228866 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 09:14:24.959234 sshd[4236]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:24.965158 systemd[1]: sshd@25-188.245.82.140:22-139.178.89.65:37354.service: Deactivated successfully. Dec 13 09:14:24.968903 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 09:14:24.970086 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit. Dec 13 09:14:24.971129 systemd-logind[1451]: Removed session 17. Dec 13 09:14:30.150287 systemd[1]: Started sshd@26-188.245.82.140:22-139.178.89.65:44912.service - OpenSSH per-connection server daemon (139.178.89.65:44912). Dec 13 09:14:31.144262 sshd[4254]: Accepted publickey for core from 139.178.89.65 port 44912 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:31.148720 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:31.154881 systemd-logind[1451]: New session 18 of user core. Dec 13 09:14:31.162324 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 09:14:31.894547 sshd[4254]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:31.900553 systemd[1]: sshd@26-188.245.82.140:22-139.178.89.65:44912.service: Deactivated successfully. 
Dec 13 09:14:31.903329 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 09:14:31.904129 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit. Dec 13 09:14:31.905407 systemd-logind[1451]: Removed session 18. Dec 13 09:14:37.082061 systemd[1]: Started sshd@27-188.245.82.140:22-139.178.89.65:44916.service - OpenSSH per-connection server daemon (139.178.89.65:44916). Dec 13 09:14:38.063940 sshd[4267]: Accepted publickey for core from 139.178.89.65 port 44916 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:38.066284 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:38.076708 systemd-logind[1451]: New session 19 of user core. Dec 13 09:14:38.078130 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 09:14:38.832328 sshd[4267]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:38.839876 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit. Dec 13 09:14:38.842872 systemd[1]: sshd@27-188.245.82.140:22-139.178.89.65:44916.service: Deactivated successfully. Dec 13 09:14:38.848822 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 09:14:38.852413 systemd-logind[1451]: Removed session 19. Dec 13 09:14:39.012711 systemd[1]: Started sshd@28-188.245.82.140:22-139.178.89.65:55070.service - OpenSSH per-connection server daemon (139.178.89.65:55070). Dec 13 09:14:40.002191 sshd[4282]: Accepted publickey for core from 139.178.89.65 port 55070 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:40.004112 sshd[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:40.010257 systemd-logind[1451]: New session 20 of user core. Dec 13 09:14:40.016782 systemd[1]: Started session-20.scope - Session 20 of User core. 
Dec 13 09:14:43.110369 containerd[1467]: time="2024-12-13T09:14:43.110264064Z" level=info msg="StopContainer for \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\" with timeout 30 (s)" Dec 13 09:14:43.114080 containerd[1467]: time="2024-12-13T09:14:43.114044349Z" level=info msg="Stop container \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\" with signal terminated" Dec 13 09:14:43.136682 systemd[1]: cri-containerd-f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec.scope: Deactivated successfully. Dec 13 09:14:43.141932 containerd[1467]: time="2024-12-13T09:14:43.141810158Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 09:14:43.151376 containerd[1467]: time="2024-12-13T09:14:43.151138869Z" level=info msg="StopContainer for \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\" with timeout 2 (s)" Dec 13 09:14:43.151885 containerd[1467]: time="2024-12-13T09:14:43.151838797Z" level=info msg="Stop container \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\" with signal terminated" Dec 13 09:14:43.163316 systemd-networkd[1375]: lxc_health: Link DOWN Dec 13 09:14:43.163323 systemd-networkd[1375]: lxc_health: Lost carrier Dec 13 09:14:43.180840 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec-rootfs.mount: Deactivated successfully. 
Dec 13 09:14:43.191882 containerd[1467]: time="2024-12-13T09:14:43.191821831Z" level=info msg="shim disconnected" id=f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec namespace=k8s.io Dec 13 09:14:43.192119 containerd[1467]: time="2024-12-13T09:14:43.192094195Z" level=warning msg="cleaning up after shim disconnected" id=f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec namespace=k8s.io Dec 13 09:14:43.192238 containerd[1467]: time="2024-12-13T09:14:43.192217396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:14:43.193542 systemd[1]: cri-containerd-692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01.scope: Deactivated successfully. Dec 13 09:14:43.193882 systemd[1]: cri-containerd-692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01.scope: Consumed 7.804s CPU time. Dec 13 09:14:43.216417 containerd[1467]: time="2024-12-13T09:14:43.216029078Z" level=info msg="StopContainer for \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\" returns successfully" Dec 13 09:14:43.217249 containerd[1467]: time="2024-12-13T09:14:43.217112971Z" level=info msg="StopPodSandbox for \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\"" Dec 13 09:14:43.217249 containerd[1467]: time="2024-12-13T09:14:43.217152812Z" level=info msg="Container to stop \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:14:43.220853 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09-shm.mount: Deactivated successfully. Dec 13 09:14:43.231951 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01-rootfs.mount: Deactivated successfully. 
Dec 13 09:14:43.236382 systemd[1]: cri-containerd-667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09.scope: Deactivated successfully. Dec 13 09:14:43.246785 containerd[1467]: time="2024-12-13T09:14:43.246600641Z" level=info msg="shim disconnected" id=692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01 namespace=k8s.io Dec 13 09:14:43.246785 containerd[1467]: time="2024-12-13T09:14:43.246765243Z" level=warning msg="cleaning up after shim disconnected" id=692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01 namespace=k8s.io Dec 13 09:14:43.246785 containerd[1467]: time="2024-12-13T09:14:43.246777843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:14:43.270681 containerd[1467]: time="2024-12-13T09:14:43.270510124Z" level=info msg="StopContainer for \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\" returns successfully" Dec 13 09:14:43.273180 containerd[1467]: time="2024-12-13T09:14:43.271797460Z" level=info msg="StopPodSandbox for \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\"" Dec 13 09:14:43.273180 containerd[1467]: time="2024-12-13T09:14:43.271855420Z" level=info msg="Container to stop \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:14:43.273180 containerd[1467]: time="2024-12-13T09:14:43.271870660Z" level=info msg="Container to stop \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:14:43.273180 containerd[1467]: time="2024-12-13T09:14:43.271882781Z" level=info msg="Container to stop \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:14:43.273180 containerd[1467]: time="2024-12-13T09:14:43.271894901Z" level=info msg="Container to stop 
\"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:14:43.273180 containerd[1467]: time="2024-12-13T09:14:43.271928141Z" level=info msg="Container to stop \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 09:14:43.271072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09-rootfs.mount: Deactivated successfully. Dec 13 09:14:43.276674 containerd[1467]: time="2024-12-13T09:14:43.276428194Z" level=info msg="shim disconnected" id=667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09 namespace=k8s.io Dec 13 09:14:43.276674 containerd[1467]: time="2024-12-13T09:14:43.276486355Z" level=warning msg="cleaning up after shim disconnected" id=667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09 namespace=k8s.io Dec 13 09:14:43.276674 containerd[1467]: time="2024-12-13T09:14:43.276499115Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:14:43.284376 systemd[1]: cri-containerd-5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852.scope: Deactivated successfully. 
Dec 13 09:14:43.293374 containerd[1467]: time="2024-12-13T09:14:43.293323235Z" level=warning msg="cleanup warnings time=\"2024-12-13T09:14:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 09:14:43.294534 containerd[1467]: time="2024-12-13T09:14:43.294378847Z" level=info msg="TearDown network for sandbox \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\" successfully" Dec 13 09:14:43.294534 containerd[1467]: time="2024-12-13T09:14:43.294408448Z" level=info msg="StopPodSandbox for \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\" returns successfully" Dec 13 09:14:43.325558 containerd[1467]: time="2024-12-13T09:14:43.325196213Z" level=info msg="shim disconnected" id=5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852 namespace=k8s.io Dec 13 09:14:43.325558 containerd[1467]: time="2024-12-13T09:14:43.325393335Z" level=warning msg="cleaning up after shim disconnected" id=5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852 namespace=k8s.io Dec 13 09:14:43.325558 containerd[1467]: time="2024-12-13T09:14:43.325403095Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:14:43.341345 containerd[1467]: time="2024-12-13T09:14:43.341201522Z" level=info msg="TearDown network for sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" successfully" Dec 13 09:14:43.341345 containerd[1467]: time="2024-12-13T09:14:43.341241203Z" level=info msg="StopPodSandbox for \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" returns successfully" Dec 13 09:14:43.348488 kubelet[2668]: I1213 09:14:43.347506 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-649xr\" (UniqueName: \"kubernetes.io/projected/32876adc-22ad-418a-a108-76d3857db0dc-kube-api-access-649xr\") pod 
\"32876adc-22ad-418a-a108-76d3857db0dc\" (UID: \"32876adc-22ad-418a-a108-76d3857db0dc\") " Dec 13 09:14:43.348488 kubelet[2668]: I1213 09:14:43.347559 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32876adc-22ad-418a-a108-76d3857db0dc-cilium-config-path\") pod \"32876adc-22ad-418a-a108-76d3857db0dc\" (UID: \"32876adc-22ad-418a-a108-76d3857db0dc\") " Dec 13 09:14:43.350094 kubelet[2668]: I1213 09:14:43.350051 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32876adc-22ad-418a-a108-76d3857db0dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32876adc-22ad-418a-a108-76d3857db0dc" (UID: "32876adc-22ad-418a-a108-76d3857db0dc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:14:43.353238 kubelet[2668]: I1213 09:14:43.353074 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32876adc-22ad-418a-a108-76d3857db0dc-kube-api-access-649xr" (OuterVolumeSpecName: "kube-api-access-649xr") pod "32876adc-22ad-418a-a108-76d3857db0dc" (UID: "32876adc-22ad-418a-a108-76d3857db0dc"). InnerVolumeSpecName "kube-api-access-649xr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:14:43.449000 kubelet[2668]: I1213 09:14:43.448302 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-etc-cni-netd\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449000 kubelet[2668]: I1213 09:14:43.448358 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-lib-modules\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449000 kubelet[2668]: I1213 09:14:43.448392 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j54jt\" (UniqueName: \"kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-kube-api-access-j54jt\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449000 kubelet[2668]: I1213 09:14:43.448415 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-bpf-maps\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449000 kubelet[2668]: I1213 09:14:43.448440 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-run\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449000 kubelet[2668]: I1213 09:14:43.448463 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-config-path\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449269 kubelet[2668]: I1213 09:14:43.448486 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-hubble-tls\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449269 kubelet[2668]: I1213 09:14:43.448506 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-cgroup\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449269 kubelet[2668]: I1213 09:14:43.448523 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-host-proc-sys-net\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449269 kubelet[2668]: I1213 09:14:43.448473 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.449269 kubelet[2668]: I1213 09:14:43.448589 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-clustermesh-secrets\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449269 kubelet[2668]: I1213 09:14:43.448664 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cni-path\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449400 kubelet[2668]: I1213 09:14:43.448687 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-hostproc\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449400 kubelet[2668]: I1213 09:14:43.448702 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-host-proc-sys-kernel\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449400 kubelet[2668]: I1213 09:14:43.448716 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-xtables-lock\") pod \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\" (UID: \"883ff6a8-3f4a-4057-8ccc-9e0d3073e33d\") " Dec 13 09:14:43.449400 kubelet[2668]: I1213 09:14:43.448750 2668 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-etc-cni-netd\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.449400 kubelet[2668]: I1213 09:14:43.448762 2668 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-649xr\" (UniqueName: \"kubernetes.io/projected/32876adc-22ad-418a-a108-76d3857db0dc-kube-api-access-649xr\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.449400 kubelet[2668]: I1213 09:14:43.448771 2668 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32876adc-22ad-418a-a108-76d3857db0dc-cilium-config-path\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.449523 kubelet[2668]: I1213 09:14:43.448803 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.449523 kubelet[2668]: I1213 09:14:43.448831 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.449523 kubelet[2668]: I1213 09:14:43.448884 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.451697 kubelet[2668]: I1213 09:14:43.451651 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 09:14:43.453450 kubelet[2668]: I1213 09:14:43.453029 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.453450 kubelet[2668]: I1213 09:14:43.453061 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.453450 kubelet[2668]: I1213 09:14:43.453092 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.453450 kubelet[2668]: I1213 09:14:43.453190 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-hostproc" (OuterVolumeSpecName: "hostproc") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.453450 kubelet[2668]: I1213 09:14:43.453240 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cni-path" (OuterVolumeSpecName: "cni-path") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.453666 kubelet[2668]: I1213 09:14:43.453278 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 09:14:43.454013 kubelet[2668]: I1213 09:14:43.453968 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-kube-api-access-j54jt" (OuterVolumeSpecName: "kube-api-access-j54jt") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "kube-api-access-j54jt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:14:43.457788 kubelet[2668]: I1213 09:14:43.457727 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 09:14:43.458727 kubelet[2668]: I1213 09:14:43.458691 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" (UID: "883ff6a8-3f4a-4057-8ccc-9e0d3073e33d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 09:14:43.549424 kubelet[2668]: I1213 09:14:43.549198 2668 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-config-path\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550022 kubelet[2668]: I1213 09:14:43.549739 2668 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-hubble-tls\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550022 kubelet[2668]: I1213 09:14:43.549832 2668 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-cgroup\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550022 kubelet[2668]: I1213 09:14:43.549849 2668 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-host-proc-sys-net\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550022 kubelet[2668]: I1213 09:14:43.549872 2668 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-clustermesh-secrets\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550022 kubelet[2668]: I1213 09:14:43.549888 2668 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cni-path\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550022 kubelet[2668]: I1213 09:14:43.549901 2668 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-hostproc\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550022 kubelet[2668]: I1213 09:14:43.549959 2668 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-host-proc-sys-kernel\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550022 kubelet[2668]: I1213 09:14:43.549976 2668 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-xtables-lock\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550527 kubelet[2668]: I1213 09:14:43.549990 2668 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-lib-modules\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550527 kubelet[2668]: I1213 09:14:43.550003 2668 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j54jt\" (UniqueName: 
\"kubernetes.io/projected/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-kube-api-access-j54jt\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550527 kubelet[2668]: I1213 09:14:43.550472 2668 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-bpf-maps\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.550527 kubelet[2668]: I1213 09:14:43.550496 2668 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d-cilium-run\") on node \"ci-4081-2-1-a-d14f804a70\" DevicePath \"\"" Dec 13 09:14:43.760746 kubelet[2668]: I1213 09:14:43.759281 2668 scope.go:117] "RemoveContainer" containerID="692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01" Dec 13 09:14:43.764537 containerd[1467]: time="2024-12-13T09:14:43.764130376Z" level=info msg="RemoveContainer for \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\"" Dec 13 09:14:43.765808 systemd[1]: Removed slice kubepods-burstable-pod883ff6a8_3f4a_4057_8ccc_9e0d3073e33d.slice - libcontainer container kubepods-burstable-pod883ff6a8_3f4a_4057_8ccc_9e0d3073e33d.slice. Dec 13 09:14:43.765903 systemd[1]: kubepods-burstable-pod883ff6a8_3f4a_4057_8ccc_9e0d3073e33d.slice: Consumed 7.891s CPU time. Dec 13 09:14:43.773478 containerd[1467]: time="2024-12-13T09:14:43.773262325Z" level=info msg="RemoveContainer for \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\" returns successfully" Dec 13 09:14:43.775073 kubelet[2668]: I1213 09:14:43.775027 2668 scope.go:117] "RemoveContainer" containerID="ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280" Dec 13 09:14:43.778374 systemd[1]: Removed slice kubepods-besteffort-pod32876adc_22ad_418a_a108_76d3857db0dc.slice - libcontainer container kubepods-besteffort-pod32876adc_22ad_418a_a108_76d3857db0dc.slice. 
Dec 13 09:14:43.779660 containerd[1467]: time="2024-12-13T09:14:43.778666309Z" level=info msg="RemoveContainer for \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\"" Dec 13 09:14:43.785587 containerd[1467]: time="2024-12-13T09:14:43.785537190Z" level=info msg="RemoveContainer for \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\" returns successfully" Dec 13 09:14:43.785992 kubelet[2668]: I1213 09:14:43.785967 2668 scope.go:117] "RemoveContainer" containerID="f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843" Dec 13 09:14:43.788266 containerd[1467]: time="2024-12-13T09:14:43.787706696Z" level=info msg="RemoveContainer for \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\"" Dec 13 09:14:43.793644 containerd[1467]: time="2024-12-13T09:14:43.793586406Z" level=info msg="RemoveContainer for \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\" returns successfully" Dec 13 09:14:43.794131 kubelet[2668]: I1213 09:14:43.794108 2668 scope.go:117] "RemoveContainer" containerID="62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8" Dec 13 09:14:43.797580 containerd[1467]: time="2024-12-13T09:14:43.797540933Z" level=info msg="RemoveContainer for \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\"" Dec 13 09:14:43.800916 containerd[1467]: time="2024-12-13T09:14:43.800868452Z" level=info msg="RemoveContainer for \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\" returns successfully" Dec 13 09:14:43.801833 kubelet[2668]: I1213 09:14:43.801119 2668 scope.go:117] "RemoveContainer" containerID="d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca" Dec 13 09:14:43.804467 containerd[1467]: time="2024-12-13T09:14:43.804089170Z" level=info msg="RemoveContainer for \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\"" Dec 13 09:14:43.808347 containerd[1467]: time="2024-12-13T09:14:43.808305660Z" level=info msg="RemoveContainer 
for \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\" returns successfully" Dec 13 09:14:43.808699 kubelet[2668]: I1213 09:14:43.808677 2668 scope.go:117] "RemoveContainer" containerID="692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01" Dec 13 09:14:43.810071 containerd[1467]: time="2024-12-13T09:14:43.810008760Z" level=error msg="ContainerStatus for \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\": not found" Dec 13 09:14:43.810394 kubelet[2668]: I1213 09:14:43.810359 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" path="/var/lib/kubelet/pods/883ff6a8-3f4a-4057-8ccc-9e0d3073e33d/volumes" Dec 13 09:14:43.810949 kubelet[2668]: E1213 09:14:43.810890 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\": not found" containerID="692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01" Dec 13 09:14:43.812137 kubelet[2668]: I1213 09:14:43.811997 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01"} err="failed to get container status \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\": rpc error: code = NotFound desc = an error occurred when try to find container \"692af5cf2993ab2242ee75e382324122807d24f877855cb810cca3936d646c01\": not found" Dec 13 09:14:43.812327 kubelet[2668]: I1213 09:14:43.812231 2668 scope.go:117] "RemoveContainer" containerID="ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280" Dec 13 09:14:43.813128 containerd[1467]: time="2024-12-13T09:14:43.813036876Z" 
level=error msg="ContainerStatus for \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\": not found" Dec 13 09:14:43.813303 kubelet[2668]: E1213 09:14:43.813208 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\": not found" containerID="ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280" Dec 13 09:14:43.813303 kubelet[2668]: I1213 09:14:43.813234 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280"} err="failed to get container status \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca68cd78ecc1270a19e191fcca2b064c93a1bbae0d9894d81bceab00f53a6280\": not found" Dec 13 09:14:43.813599 kubelet[2668]: I1213 09:14:43.813514 2668 scope.go:117] "RemoveContainer" containerID="f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843" Dec 13 09:14:43.813959 containerd[1467]: time="2024-12-13T09:14:43.813900406Z" level=error msg="ContainerStatus for \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\": not found" Dec 13 09:14:43.815585 kubelet[2668]: E1213 09:14:43.815041 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\": not found" 
containerID="f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843" Dec 13 09:14:43.815585 kubelet[2668]: I1213 09:14:43.815075 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843"} err="failed to get container status \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5d1df1dbd4ed0dfce0a809bbc8f30aca45bb5dddff2759ae1333ddcaadb0843\": not found" Dec 13 09:14:43.815585 kubelet[2668]: I1213 09:14:43.815093 2668 scope.go:117] "RemoveContainer" containerID="62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8" Dec 13 09:14:43.815972 containerd[1467]: time="2024-12-13T09:14:43.815939551Z" level=error msg="ContainerStatus for \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\": not found" Dec 13 09:14:43.816490 kubelet[2668]: E1213 09:14:43.816447 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\": not found" containerID="62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8" Dec 13 09:14:43.816605 kubelet[2668]: I1213 09:14:43.816495 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8"} err="failed to get container status \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"62e9d7a71281ec67264125b2b7c0be2d814128ac3f07670a7f134800d94890d8\": not found" Dec 13 
09:14:43.816605 kubelet[2668]: I1213 09:14:43.816527 2668 scope.go:117] "RemoveContainer" containerID="d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca" Dec 13 09:14:43.816605 kubelet[2668]: E1213 09:14:43.816996 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\": not found" containerID="d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca" Dec 13 09:14:43.816605 kubelet[2668]: I1213 09:14:43.817021 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca"} err="failed to get container status \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\": rpc error: code = NotFound desc = an error occurred when try to find container \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\": not found" Dec 13 09:14:43.816605 kubelet[2668]: I1213 09:14:43.817038 2668 scope.go:117] "RemoveContainer" containerID="f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec" Dec 13 09:14:43.817292 containerd[1467]: time="2024-12-13T09:14:43.816795881Z" level=error msg="ContainerStatus for \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d83c776c7535d42c627c6b78e65150f4fdc323748b6f5cde8bb5164f073abcca\": not found" Dec 13 09:14:43.818954 containerd[1467]: time="2024-12-13T09:14:43.818772344Z" level=info msg="RemoveContainer for \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\"" Dec 13 09:14:43.821814 containerd[1467]: time="2024-12-13T09:14:43.821728859Z" level=info msg="RemoveContainer for \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\" returns successfully" Dec 13 09:14:43.822026 kubelet[2668]: 
I1213 09:14:43.821993 2668 scope.go:117] "RemoveContainer" containerID="f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec" Dec 13 09:14:43.822282 containerd[1467]: time="2024-12-13T09:14:43.822237905Z" level=error msg="ContainerStatus for \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\": not found" Dec 13 09:14:43.822471 kubelet[2668]: E1213 09:14:43.822427 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\": not found" containerID="f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec" Dec 13 09:14:43.822646 kubelet[2668]: I1213 09:14:43.822454 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec"} err="failed to get container status \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1844cab53dbc8cfaee17610444f529eeaa3ed5da4dcd30f65a4d3b5beb358ec\": not found" Dec 13 09:14:44.102394 systemd[1]: var-lib-kubelet-pods-32876adc\x2d22ad\x2d418a\x2da108\x2d76d3857db0dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d649xr.mount: Deactivated successfully. Dec 13 09:14:44.102572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852-rootfs.mount: Deactivated successfully. Dec 13 09:14:44.102676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852-shm.mount: Deactivated successfully. 
Dec 13 09:14:44.102778 systemd[1]: var-lib-kubelet-pods-883ff6a8\x2d3f4a\x2d4057\x2d8ccc\x2d9e0d3073e33d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj54jt.mount: Deactivated successfully. Dec 13 09:14:44.102880 systemd[1]: var-lib-kubelet-pods-883ff6a8\x2d3f4a\x2d4057\x2d8ccc\x2d9e0d3073e33d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 09:14:44.103002 systemd[1]: var-lib-kubelet-pods-883ff6a8\x2d3f4a\x2d4057\x2d8ccc\x2d9e0d3073e33d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 09:14:44.968248 kubelet[2668]: E1213 09:14:44.968127 2668 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 09:14:45.172353 sshd[4282]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:45.176766 systemd[1]: sshd@28-188.245.82.140:22-139.178.89.65:55070.service: Deactivated successfully. Dec 13 09:14:45.179387 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 09:14:45.179756 systemd[1]: session-20.scope: Consumed 1.907s CPU time. Dec 13 09:14:45.182048 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit. Dec 13 09:14:45.183720 systemd-logind[1451]: Removed session 20. Dec 13 09:14:45.347590 systemd[1]: Started sshd@29-188.245.82.140:22-139.178.89.65:55072.service - OpenSSH per-connection server daemon (139.178.89.65:55072). 
Dec 13 09:14:45.810073 kubelet[2668]: I1213 09:14:45.810014 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32876adc-22ad-418a-a108-76d3857db0dc" path="/var/lib/kubelet/pods/32876adc-22ad-418a-a108-76d3857db0dc/volumes" Dec 13 09:14:46.330106 sshd[4444]: Accepted publickey for core from 139.178.89.65 port 55072 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:46.332508 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:46.338346 systemd-logind[1451]: New session 21 of user core. Dec 13 09:14:46.347146 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 09:14:47.133895 kubelet[2668]: I1213 09:14:47.133843 2668 setters.go:600] "Node became not ready" node="ci-4081-2-1-a-d14f804a70" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T09:14:47Z","lastTransitionTime":"2024-12-13T09:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 09:14:47.808837 kubelet[2668]: E1213 09:14:47.808465 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bcnkn" podUID="181b7cfa-b1d0-4c79-b9c2-88d783cddccc" Dec 13 09:14:48.284819 kubelet[2668]: E1213 09:14:48.282627 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" containerName="mount-cgroup" Dec 13 09:14:48.284819 kubelet[2668]: E1213 09:14:48.282668 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" containerName="mount-bpf-fs" Dec 13 09:14:48.284819 kubelet[2668]: E1213 09:14:48.282677 2668 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" containerName="apply-sysctl-overwrites" Dec 13 09:14:48.284819 kubelet[2668]: E1213 09:14:48.282686 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" containerName="clean-cilium-state" Dec 13 09:14:48.284819 kubelet[2668]: E1213 09:14:48.282693 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" containerName="cilium-agent" Dec 13 09:14:48.284819 kubelet[2668]: E1213 09:14:48.282699 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="32876adc-22ad-418a-a108-76d3857db0dc" containerName="cilium-operator" Dec 13 09:14:48.284819 kubelet[2668]: I1213 09:14:48.282725 2668 memory_manager.go:354] "RemoveStaleState removing state" podUID="883ff6a8-3f4a-4057-8ccc-9e0d3073e33d" containerName="cilium-agent" Dec 13 09:14:48.284819 kubelet[2668]: I1213 09:14:48.282736 2668 memory_manager.go:354] "RemoveStaleState removing state" podUID="32876adc-22ad-418a-a108-76d3857db0dc" containerName="cilium-operator" Dec 13 09:14:48.295033 systemd[1]: Created slice kubepods-burstable-pod5e210af9_3c7b_402e_94a6_b2a59db3e278.slice - libcontainer container kubepods-burstable-pod5e210af9_3c7b_402e_94a6_b2a59db3e278.slice. 
Dec 13 09:14:48.384039 kubelet[2668]: I1213 09:14:48.383248 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-host-proc-sys-net\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384039 kubelet[2668]: I1213 09:14:48.383332 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-host-proc-sys-kernel\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384039 kubelet[2668]: I1213 09:14:48.383369 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq42w\" (UniqueName: \"kubernetes.io/projected/5e210af9-3c7b-402e-94a6-b2a59db3e278-kube-api-access-vq42w\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384039 kubelet[2668]: I1213 09:14:48.383405 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-hostproc\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384039 kubelet[2668]: I1213 09:14:48.383436 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-lib-modules\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384039 kubelet[2668]: I1213 09:14:48.383468 2668 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-etc-cni-netd\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384567 kubelet[2668]: I1213 09:14:48.383499 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-cilium-run\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384567 kubelet[2668]: I1213 09:14:48.383533 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e210af9-3c7b-402e-94a6-b2a59db3e278-cilium-config-path\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384567 kubelet[2668]: I1213 09:14:48.383564 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5e210af9-3c7b-402e-94a6-b2a59db3e278-cilium-ipsec-secrets\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384567 kubelet[2668]: I1213 09:14:48.383601 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-xtables-lock\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384567 kubelet[2668]: I1213 09:14:48.383634 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/5e210af9-3c7b-402e-94a6-b2a59db3e278-clustermesh-secrets\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384567 kubelet[2668]: I1213 09:14:48.383666 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-bpf-maps\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384869 kubelet[2668]: I1213 09:14:48.383700 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-cni-path\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384869 kubelet[2668]: I1213 09:14:48.383735 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5e210af9-3c7b-402e-94a6-b2a59db3e278-hubble-tls\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.384869 kubelet[2668]: I1213 09:14:48.383765 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5e210af9-3c7b-402e-94a6-b2a59db3e278-cilium-cgroup\") pod \"cilium-swxrk\" (UID: \"5e210af9-3c7b-402e-94a6-b2a59db3e278\") " pod="kube-system/cilium-swxrk" Dec 13 09:14:48.454885 sshd[4444]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:48.464225 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit. Dec 13 09:14:48.464413 systemd[1]: sshd@29-188.245.82.140:22-139.178.89.65:55072.service: Deactivated successfully. 
Dec 13 09:14:48.468302 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 09:14:48.468625 systemd[1]: session-21.scope: Consumed 1.323s CPU time. Dec 13 09:14:48.469735 systemd-logind[1451]: Removed session 21. Dec 13 09:14:48.600747 containerd[1467]: time="2024-12-13T09:14:48.600074252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swxrk,Uid:5e210af9-3c7b-402e-94a6-b2a59db3e278,Namespace:kube-system,Attempt:0,}" Dec 13 09:14:48.625150 containerd[1467]: time="2024-12-13T09:14:48.625018141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 09:14:48.625150 containerd[1467]: time="2024-12-13T09:14:48.625067982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 09:14:48.625150 containerd[1467]: time="2024-12-13T09:14:48.625079022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:14:48.626017 containerd[1467]: time="2024-12-13T09:14:48.625201583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 09:14:48.628627 systemd[1]: Started sshd@30-188.245.82.140:22-139.178.89.65:50928.service - OpenSSH per-connection server daemon (139.178.89.65:50928). Dec 13 09:14:48.648288 systemd[1]: Started cri-containerd-ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190.scope - libcontainer container ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190. 
Dec 13 09:14:48.679121 containerd[1467]: time="2024-12-13T09:14:48.678751285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-swxrk,Uid:5e210af9-3c7b-402e-94a6-b2a59db3e278,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\"" Dec 13 09:14:48.686416 containerd[1467]: time="2024-12-13T09:14:48.686080490Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 09:14:48.706384 containerd[1467]: time="2024-12-13T09:14:48.706333605Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e058914122d737381e0001c1ef8b84df7b22a4d17ef78a8f7644c7f049de49bf\"" Dec 13 09:14:48.707949 containerd[1467]: time="2024-12-13T09:14:48.707311936Z" level=info msg="StartContainer for \"e058914122d737381e0001c1ef8b84df7b22a4d17ef78a8f7644c7f049de49bf\"" Dec 13 09:14:48.739159 systemd[1]: Started cri-containerd-e058914122d737381e0001c1ef8b84df7b22a4d17ef78a8f7644c7f049de49bf.scope - libcontainer container e058914122d737381e0001c1ef8b84df7b22a4d17ef78a8f7644c7f049de49bf. Dec 13 09:14:48.771309 containerd[1467]: time="2024-12-13T09:14:48.771259518Z" level=info msg="StartContainer for \"e058914122d737381e0001c1ef8b84df7b22a4d17ef78a8f7644c7f049de49bf\" returns successfully" Dec 13 09:14:48.789089 systemd[1]: cri-containerd-e058914122d737381e0001c1ef8b84df7b22a4d17ef78a8f7644c7f049de49bf.scope: Deactivated successfully. 
Dec 13 09:14:48.806126 kubelet[2668]: E1213 09:14:48.805708 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-xdj5x" podUID="d097575c-2861-4314-b20e-86cc64b0f14f" Dec 13 09:14:48.837124 containerd[1467]: time="2024-12-13T09:14:48.836884559Z" level=info msg="shim disconnected" id=e058914122d737381e0001c1ef8b84df7b22a4d17ef78a8f7644c7f049de49bf namespace=k8s.io Dec 13 09:14:48.837124 containerd[1467]: time="2024-12-13T09:14:48.836995641Z" level=warning msg="cleaning up after shim disconnected" id=e058914122d737381e0001c1ef8b84df7b22a4d17ef78a8f7644c7f049de49bf namespace=k8s.io Dec 13 09:14:48.837124 containerd[1467]: time="2024-12-13T09:14:48.837009201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:14:49.622369 sshd[4474]: Accepted publickey for core from 139.178.89.65 port 50928 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:49.624643 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:49.630198 systemd-logind[1451]: New session 22 of user core. Dec 13 09:14:49.639804 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 13 09:14:49.792999 containerd[1467]: time="2024-12-13T09:14:49.792940815Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 09:14:49.805969 kubelet[2668]: E1213 09:14:49.805894 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bcnkn" podUID="181b7cfa-b1d0-4c79-b9c2-88d783cddccc" Dec 13 09:14:49.812666 containerd[1467]: time="2024-12-13T09:14:49.812274318Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed\"" Dec 13 09:14:49.813046 containerd[1467]: time="2024-12-13T09:14:49.813015767Z" level=info msg="StartContainer for \"09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed\"" Dec 13 09:14:49.858358 systemd[1]: Started cri-containerd-09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed.scope - libcontainer container 09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed. Dec 13 09:14:49.890876 containerd[1467]: time="2024-12-13T09:14:49.890819106Z" level=info msg="StartContainer for \"09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed\" returns successfully" Dec 13 09:14:49.901638 systemd[1]: cri-containerd-09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed.scope: Deactivated successfully. Dec 13 09:14:49.929221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed-rootfs.mount: Deactivated successfully. 
Dec 13 09:14:49.932519 containerd[1467]: time="2024-12-13T09:14:49.932453987Z" level=info msg="shim disconnected" id=09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed namespace=k8s.io Dec 13 09:14:49.932519 containerd[1467]: time="2024-12-13T09:14:49.932507827Z" level=warning msg="cleaning up after shim disconnected" id=09d28c5aa25c33c4df4a12767f0bb8f1072ed3de439d784432a86647b321e3ed namespace=k8s.io Dec 13 09:14:49.932519 containerd[1467]: time="2024-12-13T09:14:49.932516027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:14:49.969855 kubelet[2668]: E1213 09:14:49.969757 2668 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 09:14:50.305362 sshd[4474]: pam_unix(sshd:session): session closed for user core Dec 13 09:14:50.309484 systemd[1]: sshd@30-188.245.82.140:22-139.178.89.65:50928.service: Deactivated successfully. Dec 13 09:14:50.311713 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 09:14:50.314405 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit. Dec 13 09:14:50.315271 systemd-logind[1451]: Removed session 22. Dec 13 09:14:50.483754 systemd[1]: Started sshd@31-188.245.82.140:22-139.178.89.65:50936.service - OpenSSH per-connection server daemon (139.178.89.65:50936). 
Dec 13 09:14:50.793745 containerd[1467]: time="2024-12-13T09:14:50.793696420Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 09:14:50.806305 kubelet[2668]: E1213 09:14:50.806260 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-xdj5x" podUID="d097575c-2861-4314-b20e-86cc64b0f14f" Dec 13 09:14:50.823212 containerd[1467]: time="2024-12-13T09:14:50.823135399Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc\"" Dec 13 09:14:50.825015 containerd[1467]: time="2024-12-13T09:14:50.824972940Z" level=info msg="StartContainer for \"419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc\"" Dec 13 09:14:50.859218 systemd[1]: Started cri-containerd-419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc.scope - libcontainer container 419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc. Dec 13 09:14:50.898501 containerd[1467]: time="2024-12-13T09:14:50.898451985Z" level=info msg="StartContainer for \"419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc\" returns successfully" Dec 13 09:14:50.902211 systemd[1]: cri-containerd-419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc.scope: Deactivated successfully. 
Dec 13 09:14:50.930797 containerd[1467]: time="2024-12-13T09:14:50.930690876Z" level=info msg="shim disconnected" id=419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc namespace=k8s.io Dec 13 09:14:50.931107 containerd[1467]: time="2024-12-13T09:14:50.931019200Z" level=warning msg="cleaning up after shim disconnected" id=419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc namespace=k8s.io Dec 13 09:14:50.931107 containerd[1467]: time="2024-12-13T09:14:50.931056640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:14:51.468411 sshd[4638]: Accepted publickey for core from 139.178.89.65 port 50936 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 09:14:51.470893 sshd[4638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 09:14:51.475602 systemd-logind[1451]: New session 23 of user core. Dec 13 09:14:51.482256 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 09:14:51.801036 containerd[1467]: time="2024-12-13T09:14:51.800315123Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 09:14:51.806855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-419c7e4b2f2f0ba89becbc927205693c579a0501911ed6a37de14d55e301bdfc-rootfs.mount: Deactivated successfully. 
Dec 13 09:14:51.809814 kubelet[2668]: E1213 09:14:51.809457 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bcnkn" podUID="181b7cfa-b1d0-4c79-b9c2-88d783cddccc" Dec 13 09:14:51.820026 containerd[1467]: time="2024-12-13T09:14:51.819806827Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7\"" Dec 13 09:14:51.823705 containerd[1467]: time="2024-12-13T09:14:51.820548475Z" level=info msg="StartContainer for \"2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7\"" Dec 13 09:14:51.877217 systemd[1]: Started cri-containerd-2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7.scope - libcontainer container 2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7. Dec 13 09:14:51.909068 systemd[1]: cri-containerd-2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7.scope: Deactivated successfully. Dec 13 09:14:51.914615 containerd[1467]: time="2024-12-13T09:14:51.914378710Z" level=info msg="StartContainer for \"2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7\" returns successfully" Dec 13 09:14:51.941251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7-rootfs.mount: Deactivated successfully. 
Dec 13 09:14:51.948000 containerd[1467]: time="2024-12-13T09:14:51.947847374Z" level=info msg="shim disconnected" id=2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7 namespace=k8s.io Dec 13 09:14:51.948000 containerd[1467]: time="2024-12-13T09:14:51.947954455Z" level=warning msg="cleaning up after shim disconnected" id=2be1bfdaad8aa40ff747e04d7d9e21e6f13f24ad64b85ba2f75b8172fcadedb7 namespace=k8s.io Dec 13 09:14:51.948000 containerd[1467]: time="2024-12-13T09:14:51.947969735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:14:52.234326 systemd[1]: Started sshd@32-188.245.82.140:22-95.85.47.10:50820.service - OpenSSH per-connection server daemon (95.85.47.10:50820). Dec 13 09:14:52.424870 sshd[4756]: Invalid user ovpn from 95.85.47.10 port 50820 Dec 13 09:14:52.445959 sshd[4756]: Received disconnect from 95.85.47.10 port 50820:11: Bye Bye [preauth] Dec 13 09:14:52.445959 sshd[4756]: Disconnected from invalid user ovpn 95.85.47.10 port 50820 [preauth] Dec 13 09:14:52.447815 systemd[1]: sshd@32-188.245.82.140:22-95.85.47.10:50820.service: Deactivated successfully. 
Dec 13 09:14:52.810492 kubelet[2668]: E1213 09:14:52.808454 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-xdj5x" podUID="d097575c-2861-4314-b20e-86cc64b0f14f" Dec 13 09:14:52.814958 containerd[1467]: time="2024-12-13T09:14:52.813003528Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 09:14:52.859944 containerd[1467]: time="2024-12-13T09:14:52.858694929Z" level=info msg="CreateContainer within sandbox \"ee6c43ace14ddf4d5e961cfde00acd59bc0e930682db66dd46f4e91334c55190\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"12dfa121f9785a3d57df7f97b4e666aeb1c5b9d44c8a19e54ba6015abaa32cbc\"" Dec 13 09:14:52.861792 containerd[1467]: time="2024-12-13T09:14:52.861740804Z" level=info msg="StartContainer for \"12dfa121f9785a3d57df7f97b4e666aeb1c5b9d44c8a19e54ba6015abaa32cbc\"" Dec 13 09:14:52.916137 systemd[1]: Started cri-containerd-12dfa121f9785a3d57df7f97b4e666aeb1c5b9d44c8a19e54ba6015abaa32cbc.scope - libcontainer container 12dfa121f9785a3d57df7f97b4e666aeb1c5b9d44c8a19e54ba6015abaa32cbc. 
Dec 13 09:14:52.965646 containerd[1467]: time="2024-12-13T09:14:52.965593509Z" level=info msg="StartContainer for \"12dfa121f9785a3d57df7f97b4e666aeb1c5b9d44c8a19e54ba6015abaa32cbc\" returns successfully" Dec 13 09:14:53.366026 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 13 09:14:53.806953 kubelet[2668]: E1213 09:14:53.806460 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-bcnkn" podUID="181b7cfa-b1d0-4c79-b9c2-88d783cddccc" Dec 13 09:14:53.836843 kubelet[2668]: I1213 09:14:53.836102 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-swxrk" podStartSLOduration=5.836086881 podStartE2EDuration="5.836086881s" podCreationTimestamp="2024-12-13 09:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 09:14:53.835792158 +0000 UTC m=+354.141988735" watchObservedRunningTime="2024-12-13 09:14:53.836086881 +0000 UTC m=+354.142283458" Dec 13 09:14:54.805321 kubelet[2668]: E1213 09:14:54.805237 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-xdj5x" podUID="d097575c-2861-4314-b20e-86cc64b0f14f" Dec 13 09:14:56.438844 systemd-networkd[1375]: lxc_health: Link UP Dec 13 09:14:56.449586 systemd-networkd[1375]: lxc_health: Gained carrier Dec 13 09:14:57.495281 systemd-networkd[1375]: lxc_health: Gained IPv6LL Dec 13 09:14:59.836133 containerd[1467]: time="2024-12-13T09:14:59.836006600Z" level=info msg="StopPodSandbox for 
\"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\"" Dec 13 09:14:59.836133 containerd[1467]: time="2024-12-13T09:14:59.836134402Z" level=info msg="TearDown network for sandbox \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\" successfully" Dec 13 09:14:59.836133 containerd[1467]: time="2024-12-13T09:14:59.836147122Z" level=info msg="StopPodSandbox for \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\" returns successfully" Dec 13 09:14:59.839576 containerd[1467]: time="2024-12-13T09:14:59.836875290Z" level=info msg="RemovePodSandbox for \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\"" Dec 13 09:14:59.839576 containerd[1467]: time="2024-12-13T09:14:59.836967331Z" level=info msg="Forcibly stopping sandbox \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\"" Dec 13 09:14:59.839576 containerd[1467]: time="2024-12-13T09:14:59.837075932Z" level=info msg="TearDown network for sandbox \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\" successfully" Dec 13 09:14:59.842902 containerd[1467]: time="2024-12-13T09:14:59.842747995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 09:14:59.845044 containerd[1467]: time="2024-12-13T09:14:59.844970220Z" level=info msg="RemovePodSandbox \"667e78d72216c965395208f515a759deceaed55eb2198818b78db3df22665c09\" returns successfully" Dec 13 09:14:59.846900 containerd[1467]: time="2024-12-13T09:14:59.846347115Z" level=info msg="StopPodSandbox for \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\"" Dec 13 09:14:59.846900 containerd[1467]: time="2024-12-13T09:14:59.846506637Z" level=info msg="TearDown network for sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" successfully" Dec 13 09:14:59.846900 containerd[1467]: time="2024-12-13T09:14:59.846532997Z" level=info msg="StopPodSandbox for \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" returns successfully" Dec 13 09:14:59.848150 containerd[1467]: time="2024-12-13T09:14:59.847773651Z" level=info msg="RemovePodSandbox for \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\"" Dec 13 09:14:59.848150 containerd[1467]: time="2024-12-13T09:14:59.847822451Z" level=info msg="Forcibly stopping sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\"" Dec 13 09:14:59.848150 containerd[1467]: time="2024-12-13T09:14:59.847899172Z" level=info msg="TearDown network for sandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" successfully" Dec 13 09:14:59.853726 containerd[1467]: time="2024-12-13T09:14:59.852991549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 09:14:59.853726 containerd[1467]: time="2024-12-13T09:14:59.853105030Z" level=info msg="RemovePodSandbox \"5a754a8acc46a851902638ee580828e285f78c2fbdcd74a886bab74835656852\" returns successfully" Dec 13 09:15:03.010036 sshd[4638]: pam_unix(sshd:session): session closed for user core Dec 13 09:15:03.014353 systemd[1]: sshd@31-188.245.82.140:22-139.178.89.65:50936.service: Deactivated successfully. Dec 13 09:15:03.016411 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 09:15:03.019564 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit. Dec 13 09:15:03.021288 systemd-logind[1451]: Removed session 23. Dec 13 09:15:22.223621 systemd[1]: cri-containerd-2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61.scope: Deactivated successfully. Dec 13 09:15:22.223935 systemd[1]: cri-containerd-2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61.scope: Consumed 5.987s CPU time, 18.2M memory peak, 0B memory swap peak. Dec 13 09:15:22.247170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61-rootfs.mount: Deactivated successfully. 
Dec 13 09:15:22.255810 containerd[1467]: time="2024-12-13T09:15:22.255449620Z" level=info msg="shim disconnected" id=2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61 namespace=k8s.io Dec 13 09:15:22.255810 containerd[1467]: time="2024-12-13T09:15:22.255620662Z" level=warning msg="cleaning up after shim disconnected" id=2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61 namespace=k8s.io Dec 13 09:15:22.255810 containerd[1467]: time="2024-12-13T09:15:22.255631342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 09:15:22.267584 kubelet[2668]: E1213 09:15:22.267531 2668 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60854->10.0.0.2:2379: read: connection timed out" Dec 13 09:15:22.897106 kubelet[2668]: I1213 09:15:22.896889 2668 scope.go:117] "RemoveContainer" containerID="2c9bf6762aa5b1cb10fa23969a62cc27d908e6ac66a6fadbe0ed94e089032f61" Dec 13 09:15:22.899642 containerd[1467]: time="2024-12-13T09:15:22.899592906Z" level=info msg="CreateContainer within sandbox \"14f8728fe0dbf3a67e93c72d86814e4b2062dcf4ec6cdf382e4cbc65e60aa702\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 09:15:22.914525 containerd[1467]: time="2024-12-13T09:15:22.914300295Z" level=info msg="CreateContainer within sandbox \"14f8728fe0dbf3a67e93c72d86814e4b2062dcf4ec6cdf382e4cbc65e60aa702\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6c6b76082fd4bf6695ed126ae1d5a782b3d2c243393bda036f667cb8580a0976\"" Dec 13 09:15:22.916595 containerd[1467]: time="2024-12-13T09:15:22.915318786Z" level=info msg="StartContainer for \"6c6b76082fd4bf6695ed126ae1d5a782b3d2c243393bda036f667cb8580a0976\"" Dec 13 09:15:22.952354 systemd[1]: Started cri-containerd-6c6b76082fd4bf6695ed126ae1d5a782b3d2c243393bda036f667cb8580a0976.scope - libcontainer container 6c6b76082fd4bf6695ed126ae1d5a782b3d2c243393bda036f667cb8580a0976. 
Dec 13 09:15:22.990737 containerd[1467]: time="2024-12-13T09:15:22.990686549Z" level=info msg="StartContainer for \"6c6b76082fd4bf6695ed126ae1d5a782b3d2c243393bda036f667cb8580a0976\" returns successfully"