Jan 30 14:10:42.889630 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 14:10:42.889686 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:10:42.889698 kernel: KASLR enabled
Jan 30 14:10:42.889704 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:10:42.889710 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Jan 30 14:10:42.889716 kernel: random: crng init done
Jan 30 14:10:42.889723 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:10:42.889729 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 30 14:10:42.889735 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 30 14:10:42.889741 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889749 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889755 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889761 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889768 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889775 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889784 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889790 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889797 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:10:42.889803 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 14:10:42.889810 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 30 14:10:42.889816 kernel: NUMA: Failed to initialise from firmware
Jan 30 14:10:42.889823 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 14:10:42.889830 kernel: NUMA: NODE_DATA [mem 0x13981d800-0x139822fff]
Jan 30 14:10:42.889836 kernel: Zone ranges:
Jan 30 14:10:42.889842 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 14:10:42.889849 kernel: DMA32 empty
Jan 30 14:10:42.889857 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 30 14:10:42.889864 kernel: Movable zone start for each node
Jan 30 14:10:42.889870 kernel: Early memory node ranges
Jan 30 14:10:42.889876 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 30 14:10:42.889883 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 30 14:10:42.889889 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 30 14:10:42.889896 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 30 14:10:42.889903 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 30 14:10:42.889909 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 14:10:42.889916 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 30 14:10:42.889923 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:10:42.889931 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 14:10:42.889937 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:10:42.889944 kernel: psci: Trusted OS migration not required
Jan 30 14:10:42.889954 kernel: psci: SMC Calling Convention v1.1
Jan 30 14:10:42.889961 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 14:10:42.889968 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:10:42.889976 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:10:42.889983 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:10:42.889989 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:10:42.889996 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:10:42.890003 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 14:10:42.890010 kernel: CPU features: detected: Spectre-v4
Jan 30 14:10:42.890016 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:10:42.890023 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 14:10:42.890030 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 14:10:42.890037 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 14:10:42.890044 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 14:10:42.890052 kernel: alternatives: applying boot alternatives
Jan 30 14:10:42.890061 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:10:42.890068 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:10:42.890075 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:10:42.890082 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:10:42.890088 kernel: Fallback order for Node 0: 0
Jan 30 14:10:42.890096 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 30 14:10:42.890102 kernel: Policy zone: Normal
Jan 30 14:10:42.890109 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:10:42.890116 kernel: software IO TLB: area num 2.
Jan 30 14:10:42.890123 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 30 14:10:42.890131 kernel: Memory: 3881584K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 214416K reserved, 0K cma-reserved)
Jan 30 14:10:42.890139 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:10:42.890145 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:10:42.890153 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:10:42.890161 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:10:42.890168 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:10:42.890175 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:10:42.890182 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:10:42.890189 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:10:42.890195 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:10:42.890202 kernel: GICv3: 256 SPIs implemented
Jan 30 14:10:42.890210 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:10:42.890217 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:10:42.890224 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 14:10:42.890231 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 14:10:42.890238 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 14:10:42.890245 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 14:10:42.890252 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 14:10:42.890259 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 30 14:10:42.890266 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 30 14:10:42.890273 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:10:42.890280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:10:42.890289 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 14:10:42.890296 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 14:10:42.890303 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 14:10:42.890310 kernel: Console: colour dummy device 80x25
Jan 30 14:10:42.890317 kernel: ACPI: Core revision 20230628
Jan 30 14:10:42.890325 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 14:10:42.890332 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:10:42.890339 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:10:42.890346 kernel: landlock: Up and running.
Jan 30 14:10:42.890353 kernel: SELinux: Initializing.
Jan 30 14:10:42.890361 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:10:42.890368 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:10:42.890376 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:10:42.890383 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:10:42.890390 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:10:42.891001 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:10:42.891013 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 14:10:42.891020 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 14:10:42.891027 kernel: Remapping and enabling EFI services.
Jan 30 14:10:42.891038 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:10:42.891046 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:10:42.891053 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 14:10:42.891060 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 30 14:10:42.891068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 14:10:42.891075 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 14:10:42.891082 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:10:42.891089 kernel: SMP: Total of 2 processors activated.
Jan 30 14:10:42.891096 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:10:42.891105 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 14:10:42.891117 kernel: CPU features: detected: Common not Private translations
Jan 30 14:10:42.891125 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:10:42.891140 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 14:10:42.891150 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 14:10:42.891158 kernel: CPU features: detected: LSE atomic instructions
Jan 30 14:10:42.891165 kernel: CPU features: detected: Privileged Access Never
Jan 30 14:10:42.891173 kernel: CPU features: detected: RAS Extension Support
Jan 30 14:10:42.891181 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 14:10:42.891189 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:10:42.891198 kernel: alternatives: applying system-wide alternatives
Jan 30 14:10:42.891205 kernel: devtmpfs: initialized
Jan 30 14:10:42.891213 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:10:42.891221 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:10:42.891228 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:10:42.891236 kernel: SMBIOS 3.0.0 present.
Jan 30 14:10:42.891243 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 30 14:10:42.891252 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:10:42.891260 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:10:42.891268 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:10:42.891276 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:10:42.891283 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:10:42.891291 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Jan 30 14:10:42.891298 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:10:42.891306 kernel: cpuidle: using governor menu
Jan 30 14:10:42.891313 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:10:42.891322 kernel: ASID allocator initialised with 32768 entries
Jan 30 14:10:42.891330 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:10:42.891337 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:10:42.891345 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 14:10:42.891352 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 14:10:42.891360 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:10:42.891367 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:10:42.891375 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:10:42.891382 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:10:42.891391 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:10:42.891424 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:10:42.891431 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:10:42.891439 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:10:42.891446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:10:42.891454 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:10:42.891461 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:10:42.891468 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:10:42.891476 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:10:42.891486 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:10:42.891493 kernel: ACPI: Interpreter enabled
Jan 30 14:10:42.891501 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:10:42.891508 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 14:10:42.891516 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 14:10:42.891523 kernel: printk: console [ttyAMA0] enabled
Jan 30 14:10:42.891531 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:10:42.891725 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:10:42.891811 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 14:10:42.891878 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 14:10:42.891944 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 14:10:42.892007 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 14:10:42.892017 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 14:10:42.892024 kernel: PCI host bridge to bus 0000:00
Jan 30 14:10:42.892096 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 14:10:42.892158 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 14:10:42.892222 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 14:10:42.892280 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:10:42.892365 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 14:10:42.892464 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 30 14:10:42.892539 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 30 14:10:42.892610 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 14:10:42.892701 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.892771 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 30 14:10:42.892853 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.892921 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 30 14:10:42.892994 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.893061 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 30 14:10:42.893142 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.893222 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 30 14:10:42.893300 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.893374 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 30 14:10:42.896577 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.896706 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 30 14:10:42.896800 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.896868 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 30 14:10:42.896942 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.897011 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 30 14:10:42.897086 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 14:10:42.897154 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 30 14:10:42.897237 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 30 14:10:42.897308 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Jan 30 14:10:42.899436 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 14:10:42.899558 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 30 14:10:42.899635 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 14:10:42.899726 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 14:10:42.899807 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 14:10:42.899886 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 30 14:10:42.899964 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 14:10:42.900035 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 30 14:10:42.900106 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 30 14:10:42.900182 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 14:10:42.900251 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 30 14:10:42.900332 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 14:10:42.900431 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 30 14:10:42.900514 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 30 14:10:42.900604 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 14:10:42.900721 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 30 14:10:42.900802 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 14:10:42.900892 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 14:10:42.900996 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 30 14:10:42.901069 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 30 14:10:42.901139 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 14:10:42.901212 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 30 14:10:42.901282 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 30 14:10:42.901349 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 30 14:10:42.902528 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 30 14:10:42.902616 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 30 14:10:42.902731 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 30 14:10:42.902808 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 14:10:42.902876 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 30 14:10:42.902941 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 30 14:10:42.903010 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 14:10:42.903082 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 30 14:10:42.903150 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 30 14:10:42.903229 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 14:10:42.903299 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 30 14:10:42.903365 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 30 14:10:42.904502 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 14:10:42.904587 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 30 14:10:42.904698 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 30 14:10:42.904788 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 14:10:42.904860 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 30 14:10:42.904943 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 30 14:10:42.905017 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 14:10:42.905085 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 30 14:10:42.905152 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 30 14:10:42.905225 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 14:10:42.905295 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 30 14:10:42.905360 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 30 14:10:42.906543 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 30 14:10:42.906629 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:10:42.906757 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 30 14:10:42.906828 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:10:42.906900 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 30 14:10:42.906975 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:10:42.907043 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 30 14:10:42.907108 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:10:42.907179 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 30 14:10:42.907250 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:10:42.907320 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 30 14:10:42.907388 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:10:42.908799 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 30 14:10:42.908872 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:10:42.908941 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 30 14:10:42.909008 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:10:42.909075 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 30 14:10:42.909141 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:10:42.909214 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 30 14:10:42.909287 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 30 14:10:42.909354 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 30 14:10:42.909458 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 14:10:42.909531 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 30 14:10:42.909605 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 14:10:42.909688 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 30 14:10:42.909755 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 14:10:42.909822 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 30 14:10:42.909894 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 14:10:42.909961 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 30 14:10:42.910026 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 14:10:42.910093 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 30 14:10:42.910157 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 14:10:42.910224 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 30 14:10:42.910290 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 14:10:42.910358 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 30 14:10:42.912517 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 14:10:42.912608 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 30 14:10:42.912695 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 30 14:10:42.912770 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 30 14:10:42.912844 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 30 14:10:42.912913 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 14:10:42.912979 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 30 14:10:42.913047 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 14:10:42.913122 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 14:10:42.913186 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 30 14:10:42.913251 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:10:42.913324 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 30 14:10:42.913431 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 14:10:42.913508 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 14:10:42.913575 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 30 14:10:42.913674 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:10:42.913763 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 14:10:42.913862 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 30 14:10:42.913933 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 14:10:42.913998 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 14:10:42.914068 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 30 14:10:42.914134 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:10:42.914219 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 14:10:42.914286 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 14:10:42.914354 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 14:10:42.914435 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 30 14:10:42.914503 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:10:42.914577 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 30 14:10:42.914661 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 30 14:10:42.914732 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 14:10:42.914797 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 14:10:42.914861 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 30 14:10:42.914926 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:10:42.915003 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 30 14:10:42.915071 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 30 14:10:42.915140 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 14:10:42.915211 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 14:10:42.915276 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 30 14:10:42.915343 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:10:42.916152 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 30 14:10:42.916249 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 30 14:10:42.916323 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 30 14:10:42.916392 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 14:10:42.916649 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 14:10:42.916738 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 30 14:10:42.916806 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:10:42.916875 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 14:10:42.916940 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 14:10:42.917004 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 30 14:10:42.917068 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:10:42.917137 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 14:10:42.917203 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 30 14:10:42.917270 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 30 14:10:42.917335 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:10:42.917495 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 14:10:42.917564 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 14:10:42.917626 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 14:10:42.917712 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 14:10:42.917778 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 30 14:10:42.917844 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 14:10:42.917920 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 30 14:10:42.917980 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 30 14:10:42.918040 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 14:10:42.918109 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 30 14:10:42.918173 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 30 14:10:42.918238 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 14:10:42.918305 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 30 14:10:42.918366 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 30 14:10:42.918457 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 14:10:42.918528 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 30 14:10:42.918591 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 30 14:10:42.918698 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 14:10:42.918777 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 30 14:10:42.918840 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 30 14:10:42.918901 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 14:10:42.918969 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 30 14:10:42.919036 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 30 14:10:42.919097 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 14:10:42.919168 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 30 14:10:42.919233 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 30 14:10:42.919296 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 14:10:42.919370 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 30 14:10:42.919509 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 30 14:10:42.919582 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 14:10:42.919592 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 14:10:42.919601 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 14:10:42.919609 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 14:10:42.919617 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 14:10:42.919625 kernel: iommu: Default domain type: Translated
Jan 30 14:10:42.919634 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:10:42.919655 kernel: efivars: Registered efivars operations
Jan 30 14:10:42.919667 kernel: vgaarb: loaded
Jan 30 14:10:42.919675 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:10:42.919683 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:10:42.919690 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:10:42.919698 kernel: pnp: PnP ACPI init
Jan 30 14:10:42.919787 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 14:10:42.919799 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 14:10:42.919807 kernel: NET: Registered PF_INET protocol family
Jan 30 14:10:42.919815 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:10:42.919826 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:10:42.919834 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:10:42.919842 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:10:42.919850 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:10:42.919858 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:10:42.919866 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:10:42.919874 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:10:42.919882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:10:42.919962 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 30 14:10:42.919974 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:10:42.919982 kernel: kvm [1]: HYP mode not available
Jan 30 14:10:42.919990 kernel: Initialise system trusted keyrings
Jan 30 14:10:42.919998 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:10:42.920006 kernel: Key type asymmetric registered
Jan 30 14:10:42.920013 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:10:42.920021 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:10:42.920029 kernel: io scheduler mq-deadline registered
Jan 30 14:10:42.920040 kernel: io scheduler kyber registered
Jan 30 14:10:42.920048 kernel: io scheduler bfq registered
Jan 30 14:10:42.920056 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 14:10:42.920125 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 30 14:10:42.920192 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 30 14:10:42.920258 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 14:10:42.920326 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 30 14:10:42.920407 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 30 14:10:42.920482 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 30 14:10:42.920551 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 30 14:10:42.920621 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 30 14:10:42.920699 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:10:42.920771 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 30 14:10:42.920842 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 30 14:10:42.920909 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:10:42.920979 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 30 14:10:42.921047 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 30 14:10:42.921112 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:10:42.921182 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 30 14:10:42.921251 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 30 14:10:42.921319 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:10:42.921405 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 30 14:10:42.921479 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 30 14:10:42.921547 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:10:42.921617 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 30 14:10:42.921736 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 30 14:10:42.921813 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 
14:10:42.921824 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 30 14:10:42.921896 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 30 14:10:42.921965 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 30 14:10:42.922033 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 14:10:42.922044 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 14:10:42.922057 kernel: ACPI: button: Power Button [PWRB] Jan 30 14:10:42.922066 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 14:10:42.922138 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Jan 30 14:10:42.922213 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 30 14:10:42.922289 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 30 14:10:42.922301 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 14:10:42.922310 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 30 14:10:42.922379 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 30 14:10:42.922393 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 30 14:10:42.924455 kernel: thunder_xcv, ver 1.0 Jan 30 14:10:42.924464 kernel: thunder_bgx, ver 1.0 Jan 30 14:10:42.924472 kernel: nicpf, ver 1.0 Jan 30 14:10:42.924481 kernel: nicvf, ver 1.0 Jan 30 14:10:42.924625 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 14:10:42.924723 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:10:42 UTC (1738246242) Jan 30 14:10:42.924736 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 14:10:42.924752 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 14:10:42.924761 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 14:10:42.924769 kernel: watchdog: Hard watchdog permanently disabled Jan 30 14:10:42.924777 
kernel: NET: Registered PF_INET6 protocol family Jan 30 14:10:42.924785 kernel: Segment Routing with IPv6 Jan 30 14:10:42.924793 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 14:10:42.924801 kernel: NET: Registered PF_PACKET protocol family Jan 30 14:10:42.924809 kernel: Key type dns_resolver registered Jan 30 14:10:42.924817 kernel: registered taskstats version 1 Jan 30 14:10:42.924826 kernel: Loading compiled-in X.509 certificates Jan 30 14:10:42.924835 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 14:10:42.924843 kernel: Key type .fscrypt registered Jan 30 14:10:42.924851 kernel: Key type fscrypt-provisioning registered Jan 30 14:10:42.924859 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 14:10:42.924867 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:10:42.924876 kernel: ima: No architecture policies found Jan 30 14:10:42.924884 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 14:10:42.924892 kernel: clk: Disabling unused clocks Jan 30 14:10:42.924902 kernel: Freeing unused kernel memory: 39360K Jan 30 14:10:42.924910 kernel: Run /init as init process Jan 30 14:10:42.924918 kernel: with arguments: Jan 30 14:10:42.924926 kernel: /init Jan 30 14:10:42.924933 kernel: with environment: Jan 30 14:10:42.924941 kernel: HOME=/ Jan 30 14:10:42.924949 kernel: TERM=linux Jan 30 14:10:42.924957 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:10:42.924967 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:10:42.924979 systemd[1]: Detected virtualization kvm. Jan 30 14:10:42.924988 systemd[1]: Detected architecture arm64. 
Jan 30 14:10:42.924996 systemd[1]: Running in initrd.
Jan 30 14:10:42.925005 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:10:42.925013 systemd[1]: Hostname set to .
Jan 30 14:10:42.925024 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:10:42.925032 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:10:42.925042 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:10:42.925051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:10:42.925061 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:10:42.925070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:10:42.925078 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:10:42.925087 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:10:42.925097 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:10:42.925108 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:10:42.925117 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:10:42.925125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:42.925134 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:10:42.925142 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:42.925152 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:42.925160 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:10:42.925169 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:10:42.925179 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:10:42.925188 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:10:42.925197 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:10:42.925205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:42.925214 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:42.925223 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:42.925231 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:10:42.925240 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:10:42.925249 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:42.925259 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:10:42.925268 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:10:42.925276 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:42.925285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:42.925294 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:42.925302 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:10:42.925311 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:42.925320 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:10:42.925377 systemd-journald[236]: Collecting audit messages is disabled.
Jan 30 14:10:42.927474 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:10:42.927487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:42.927497 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:42.927506 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:10:42.927515 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:10:42.927524 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:10:42.927533 kernel: Bridge firewalling registered
Jan 30 14:10:42.927542 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:42.927559 systemd-journald[236]: Journal started
Jan 30 14:10:42.927581 systemd-journald[236]: Runtime Journal (/run/log/journal/e061efef21604a4d942649e349fd975e) is 8.0M, max 76.5M, 68.5M free.
Jan 30 14:10:42.886499 systemd-modules-load[237]: Inserted module 'overlay'
Jan 30 14:10:42.929778 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:10:42.915439 systemd-modules-load[237]: Inserted module 'br_netfilter'
Jan 30 14:10:42.931507 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:10:42.932058 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:42.935479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:42.944592 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:10:42.946631 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:10:42.948665 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:42.961848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:42.965131 dracut-cmdline[265]: dracut-dracut-053
Jan 30 14:10:42.968712 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:42.972233 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:10:43.003866 systemd-resolved[277]: Positive Trust Anchors:
Jan 30 14:10:43.003884 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:10:43.003918 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:10:43.009550 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 30 14:10:43.010675 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:43.011309 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:43.065436 kernel: SCSI subsystem initialized
Jan 30 14:10:43.069420 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:10:43.077429 kernel: iscsi: registered transport (tcp)
Jan 30 14:10:43.090659 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:10:43.090779 kernel: QLogic iSCSI HBA Driver
Jan 30 14:10:43.130767 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:10:43.137632 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:10:43.158857 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:10:43.158941 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:10:43.158954 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:10:43.208489 kernel: raid6: neonx8 gen() 15616 MB/s
Jan 30 14:10:43.225445 kernel: raid6: neonx4 gen() 15520 MB/s
Jan 30 14:10:43.242453 kernel: raid6: neonx2 gen() 13136 MB/s
Jan 30 14:10:43.259442 kernel: raid6: neonx1 gen() 10379 MB/s
Jan 30 14:10:43.276456 kernel: raid6: int64x8 gen() 6919 MB/s
Jan 30 14:10:43.293529 kernel: raid6: int64x4 gen() 7272 MB/s
Jan 30 14:10:43.310454 kernel: raid6: int64x2 gen() 6061 MB/s
Jan 30 14:10:43.327453 kernel: raid6: int64x1 gen() 5008 MB/s
Jan 30 14:10:43.327540 kernel: raid6: using algorithm neonx8 gen() 15616 MB/s
Jan 30 14:10:43.344473 kernel: raid6: .... xor() 11819 MB/s, rmw enabled
Jan 30 14:10:43.344554 kernel: raid6: using neon recovery algorithm
Jan 30 14:10:43.349614 kernel: xor: measuring software checksum speed
Jan 30 14:10:43.349698 kernel: 8regs : 19778 MB/sec
Jan 30 14:10:43.349722 kernel: 32regs : 19683 MB/sec
Jan 30 14:10:43.350444 kernel: arm64_neon : 22758 MB/sec
Jan 30 14:10:43.350486 kernel: xor: using function: arm64_neon (22758 MB/sec)
Jan 30 14:10:43.401471 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:10:43.415717 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:10:43.422584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:43.444864 systemd-udevd[453]: Using default interface naming scheme 'v255'.
Jan 30 14:10:43.449048 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:43.461430 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:10:43.476936 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Jan 30 14:10:43.517440 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:10:43.521599 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:43.575135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:43.585596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:10:43.609787 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:10:43.611761 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:10:43.613745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:10:43.614970 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:43.620577 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:10:43.644515 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:10:43.677786 kernel: scsi host0: Virtio SCSI HBA
Jan 30 14:10:43.689955 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 14:10:43.690053 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 30 14:10:43.697378 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:10:43.698380 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:43.724025 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:43.724777 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:10:43.724947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:43.725946 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:43.734811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:43.742703 kernel: ACPI: bus type USB registered
Jan 30 14:10:43.743840 kernel: usbcore: registered new interface driver usbfs
Jan 30 14:10:43.743885 kernel: usbcore: registered new interface driver hub
Jan 30 14:10:43.744408 kernel: usbcore: registered new device driver usb
Jan 30 14:10:43.749959 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:43.757914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:10:43.766028 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 30 14:10:43.770204 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 30 14:10:43.770335 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 14:10:43.770347 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 30 14:10:43.782441 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 14:10:43.795547 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 30 14:10:43.795711 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 14:10:43.795815 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 14:10:43.795901 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 30 14:10:43.795980 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 14:10:43.796062 kernel: hub 1-0:1.0: USB hub found
Jan 30 14:10:43.796164 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 14:10:43.796248 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 14:10:43.796343 kernel: hub 2-0:1.0: USB hub found
Jan 30 14:10:43.796492 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 14:10:43.795656 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:10:43.799762 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 30 14:10:43.807477 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 30 14:10:43.807629 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 30 14:10:43.807752 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 30 14:10:43.807881 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 14:10:43.807997 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:10:43.808009 kernel: GPT:17805311 != 80003071
Jan 30 14:10:43.808020 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:10:43.808031 kernel: GPT:17805311 != 80003071
Jan 30 14:10:43.808042 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:10:43.808052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:10:43.808063 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 30 14:10:43.849418 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (503)
Jan 30 14:10:43.852247 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 30 14:10:43.856632 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (512)
Jan 30 14:10:43.857367 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 30 14:10:43.875065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 30 14:10:43.880871 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 30 14:10:43.882540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 30 14:10:43.892712 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:10:43.899624 disk-uuid[571]: Primary Header is updated.
Jan 30 14:10:43.899624 disk-uuid[571]: Secondary Entries is updated.
Jan 30 14:10:43.899624 disk-uuid[571]: Secondary Header is updated.
Jan 30 14:10:43.904444 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:10:44.029632 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 30 14:10:44.274505 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 30 14:10:44.410803 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 30 14:10:44.410862 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 30 14:10:44.412367 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 30 14:10:44.465559 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 30 14:10:44.465916 kernel: usbcore: registered new interface driver usbhid
Jan 30 14:10:44.466569 kernel: usbhid: USB HID core driver
Jan 30 14:10:44.919340 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 14:10:44.919417 disk-uuid[572]: The operation has completed successfully.
Jan 30 14:10:44.971823 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:10:44.971924 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:10:44.985623 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:10:45.001101 sh[591]: Success
Jan 30 14:10:45.015424 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 14:10:45.080172 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:10:45.082540 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:10:45.085325 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:10:45.107465 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 14:10:45.107530 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:10:45.107544 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:10:45.107561 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:10:45.107573 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:10:45.114458 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 14:10:45.116341 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:10:45.117557 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:10:45.128800 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:10:45.134886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:10:45.151642 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:45.151698 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:10:45.151718 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:10:45.155490 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 14:10:45.155550 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:10:45.168383 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:10:45.169462 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:10:45.178031 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:10:45.186890 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:10:45.267822 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:10:45.281677 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:10:45.301350 ignition[687]: Ignition 2.19.0
Jan 30 14:10:45.301990 ignition[687]: Stage: fetch-offline
Jan 30 14:10:45.302042 ignition[687]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:45.302051 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 14:10:45.304824 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:10:45.302732 ignition[687]: parsed url from cmdline: ""
Jan 30 14:10:45.302737 ignition[687]: no config URL provided
Jan 30 14:10:45.302743 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:10:45.302756 ignition[687]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:10:45.302762 ignition[687]: failed to fetch config: resource requires networking
Jan 30 14:10:45.302982 ignition[687]: Ignition finished successfully
Jan 30 14:10:45.311888 systemd-networkd[778]: lo: Link UP
Jan 30 14:10:45.311892 systemd-networkd[778]: lo: Gained carrier
Jan 30 14:10:45.313521 systemd-networkd[778]: Enumeration completed
Jan 30 14:10:45.313679 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:10:45.314508 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:45.314511 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:45.314821 systemd[1]: Reached target network.target - Network.
Jan 30 14:10:45.316023 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:45.316026 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:45.316611 systemd-networkd[778]: eth0: Link UP
Jan 30 14:10:45.316614 systemd-networkd[778]: eth0: Gained carrier
Jan 30 14:10:45.316621 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:45.321890 systemd-networkd[778]: eth1: Link UP
Jan 30 14:10:45.321894 systemd-networkd[778]: eth1: Gained carrier
Jan 30 14:10:45.321903 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:45.323130 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:10:45.341112 ignition[781]: Ignition 2.19.0
Jan 30 14:10:45.341129 ignition[781]: Stage: fetch
Jan 30 14:10:45.341364 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:45.341378 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 14:10:45.341532 ignition[781]: parsed url from cmdline: ""
Jan 30 14:10:45.341535 ignition[781]: no config URL provided
Jan 30 14:10:45.341540 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:10:45.341548 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:10:45.341572 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 30 14:10:45.342284 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 30 14:10:45.361528 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 14:10:45.384512 systemd-networkd[778]: eth0: DHCPv4 address 138.199.157.113/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 14:10:45.542385 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 30 14:10:45.549301 ignition[781]: GET result: OK
Jan 30 14:10:45.549425 ignition[781]: parsing config with SHA512: a23548007eb19744bb1502417b36071ae54c33148964476dac75f988d415d73cabddd84519388bb9e5e54e3fb9770c7598d7e8049c3fdc0a6703c90d90e053d4
Jan 30 14:10:45.554623 unknown[781]: fetched base config from "system"
Jan 30 14:10:45.554644 unknown[781]: fetched base config from "system"
Jan 30 14:10:45.555102 ignition[781]: fetch: fetch complete
Jan 30 14:10:45.554649 unknown[781]: fetched user config from "hetzner"
Jan 30 14:10:45.555106 ignition[781]: fetch: fetch passed
Jan 30 14:10:45.555151 ignition[781]: Ignition finished successfully
Jan 30 14:10:45.558422 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:10:45.564608 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:10:45.582891 ignition[789]: Ignition 2.19.0
Jan 30 14:10:45.582911 ignition[789]: Stage: kargs
Jan 30 14:10:45.583106 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:45.583116 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 14:10:45.584176 ignition[789]: kargs: kargs passed
Jan 30 14:10:45.584227 ignition[789]: Ignition finished successfully
Jan 30 14:10:45.588208 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:10:45.594601 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:10:45.608281 ignition[795]: Ignition 2.19.0
Jan 30 14:10:45.608291 ignition[795]: Stage: disks
Jan 30 14:10:45.608490 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:10:45.608510 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 14:10:45.609566 ignition[795]: disks: disks passed
Jan 30 14:10:45.609617 ignition[795]: Ignition finished successfully
Jan 30 14:10:45.611888 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:10:45.613795 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:10:45.615480 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:10:45.616537 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:10:45.617112 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:10:45.618427 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:10:45.626712 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:10:45.649109 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 14:10:45.653381 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:10:45.659691 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 14:10:45.705415 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 14:10:45.706499 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:10:45.708278 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:10:45.715554 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:45.719539 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:10:45.728905 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 14:10:45.732488 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:10:45.732527 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 30 14:10:45.744977 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (811) Jan 30 14:10:45.745205 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:45.745237 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:45.745263 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:45.738776 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:10:45.748672 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:10:45.754381 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:10:45.754454 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:45.759175 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:10:45.793815 coreos-metadata[813]: Jan 30 14:10:45.793 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 14:10:45.795620 coreos-metadata[813]: Jan 30 14:10:45.795 INFO Fetch successful Jan 30 14:10:45.797351 coreos-metadata[813]: Jan 30 14:10:45.797 INFO wrote hostname ci-4081-3-0-0-5370901337 to /sysroot/etc/hostname Jan 30 14:10:45.801053 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:45.804782 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:10:45.813511 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:10:45.819870 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:10:45.825780 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:10:45.925607 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:45.931560 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:10:45.934898 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 30 14:10:45.941439 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:45.964649 ignition[928]: INFO : Ignition 2.19.0 Jan 30 14:10:45.964649 ignition[928]: INFO : Stage: mount Jan 30 14:10:45.968161 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:45.968161 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:45.968161 ignition[928]: INFO : mount: mount passed Jan 30 14:10:45.968161 ignition[928]: INFO : Ignition finished successfully Jan 30 14:10:45.970435 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:10:45.976551 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:10:45.978191 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:10:46.107011 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:10:46.115621 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:10:46.128606 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939) Jan 30 14:10:46.128686 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 14:10:46.129525 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 14:10:46.129566 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:10:46.133611 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:10:46.133705 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:10:46.135750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:10:46.160584 ignition[956]: INFO : Ignition 2.19.0 Jan 30 14:10:46.160584 ignition[956]: INFO : Stage: files Jan 30 14:10:46.163502 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:46.163502 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:46.163502 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:10:46.165847 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:10:46.165847 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:10:46.168973 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:10:46.170125 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:10:46.171488 unknown[956]: wrote ssh authorized keys file for user: core Jan 30 14:10:46.172325 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:10:46.175816 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 14:10:46.175816 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 14:10:46.175816 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 14:10:46.175816 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 14:10:46.223075 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 14:10:46.348918 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 
14:10:46.350179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 14:10:46.350179 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 30 14:10:46.840667 systemd-networkd[778]: eth0: Gained IPv6LL Jan 30 14:10:47.032740 systemd-networkd[778]: eth1: Gained IPv6LL Jan 30 14:10:47.161255 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Jan 30 14:10:47.435451 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 14:10:47.435451 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:10:47.441471 
ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:10:47.441471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 14:10:47.819443 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Jan 30 14:10:48.151591 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 14:10:48.151591 ignition[956]: INFO : files: op(d): [started] processing unit "containerd.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(d): [finished] 
processing unit "containerd.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(11): op(12): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(11): op(12): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:10:48.154371 ignition[956]: INFO : files: createResultFile: createFiles: op(14): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:48.154371 ignition[956]: INFO : files: createResultFile: createFiles: op(14): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:10:48.154371 ignition[956]: INFO : files: files passed Jan 30 14:10:48.154371 ignition[956]: INFO : Ignition finished successfully Jan 30 14:10:48.160433 systemd[1]: Finished ignition-files.service - 
Ignition (files). Jan 30 14:10:48.168636 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:10:48.171812 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:10:48.175569 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:10:48.175723 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:10:48.185988 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:48.185988 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:48.188343 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:10:48.190846 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:48.191714 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:10:48.197557 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:10:48.228613 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:10:48.230145 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:10:48.233278 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:10:48.234748 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:10:48.236553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:10:48.242680 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:10:48.255436 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:48.262597 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jan 30 14:10:48.277390 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:10:48.278852 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:10:48.280145 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:10:48.280738 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:10:48.280866 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:10:48.283373 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:10:48.284501 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:10:48.285701 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:10:48.286632 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:10:48.287590 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:10:48.288601 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:10:48.289539 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:10:48.290684 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:10:48.291696 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:10:48.292594 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:10:48.293411 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:10:48.293590 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:10:48.294818 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:10:48.295953 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:48.297028 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:10:48.297526 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 14:10:48.298301 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:10:48.298487 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:10:48.300035 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:10:48.300166 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:10:48.301367 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:10:48.301554 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:10:48.302362 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 14:10:48.302528 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 14:10:48.314928 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:10:48.316872 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:10:48.317155 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:10:48.321705 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:10:48.322780 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:10:48.322933 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:10:48.325082 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:10:48.325522 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:10:48.333809 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:10:48.334471 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 30 14:10:48.343425 ignition[1009]: INFO : Ignition 2.19.0 Jan 30 14:10:48.343425 ignition[1009]: INFO : Stage: umount Jan 30 14:10:48.343425 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:10:48.343425 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 14:10:48.358375 ignition[1009]: INFO : umount: umount passed Jan 30 14:10:48.358375 ignition[1009]: INFO : Ignition finished successfully Jan 30 14:10:48.345739 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:10:48.352007 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:10:48.352105 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:10:48.354779 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:10:48.354877 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:10:48.363231 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:10:48.363299 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:10:48.364087 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:10:48.364132 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:10:48.370699 systemd[1]: Stopped target network.target - Network. Jan 30 14:10:48.371170 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:10:48.371234 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:10:48.373577 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:10:48.374091 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:10:48.374464 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:48.375588 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:10:48.376729 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 30 14:10:48.378137 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:10:48.378184 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:10:48.379767 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:10:48.379803 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:10:48.386579 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:10:48.386673 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:10:48.390275 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:10:48.390377 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:10:48.395947 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:10:48.396768 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:10:48.397958 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:10:48.399544 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:10:48.400672 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:10:48.400768 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:10:48.404463 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 30 14:10:48.408018 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:10:48.408151 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:10:48.409169 systemd-networkd[778]: eth1: DHCPv6 lease lost Jan 30 14:10:48.413930 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:10:48.414645 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:10:48.416636 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:10:48.417176 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:10:48.429795 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 30 14:10:48.431191 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:10:48.431271 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:10:48.432774 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:10:48.432857 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:10:48.433967 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:10:48.434010 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:10:48.435085 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:10:48.435131 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:10:48.436676 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:10:48.452977 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:10:48.453123 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:10:48.455686 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:10:48.455871 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:10:48.457597 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:10:48.457658 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:10:48.458700 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:10:48.458734 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:10:48.459649 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:10:48.459697 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:10:48.461412 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:10:48.461464 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 30 14:10:48.462962 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:10:48.463026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:10:48.471680 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:10:48.472228 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:10:48.472287 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:10:48.474440 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:10:48.474490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:10:48.481358 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:10:48.481527 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:10:48.483124 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:10:48.497778 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:10:48.510120 systemd[1]: Switching root. Jan 30 14:10:48.537354 systemd-journald[236]: Journal stopped Jan 30 14:10:49.488943 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). 
Jan 30 14:10:49.489018 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:10:49.489031 kernel: SELinux: policy capability open_perms=1 Jan 30 14:10:49.489040 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:10:49.489054 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:10:49.489063 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:10:49.489078 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:10:49.489087 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:10:49.489100 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:10:49.489110 kernel: audit: type=1403 audit(1738246248.709:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:10:49.489121 systemd[1]: Successfully loaded SELinux policy in 34.950ms. Jan 30 14:10:49.489144 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.062ms. Jan 30 14:10:49.489157 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:10:49.489168 systemd[1]: Detected virtualization kvm. Jan 30 14:10:49.489178 systemd[1]: Detected architecture arm64. Jan 30 14:10:49.489190 systemd[1]: Detected first boot. Jan 30 14:10:49.489201 systemd[1]: Hostname set to . Jan 30 14:10:49.489211 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:10:49.489221 zram_generator::config[1071]: No configuration found. Jan 30 14:10:49.489232 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:10:49.489243 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:10:49.489254 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Jan 30 14:10:49.489265 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:10:49.489277 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:10:49.489287 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:10:49.489298 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:10:49.489308 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:10:49.489320 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:10:49.489330 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:10:49.489340 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:10:49.489350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:10:49.489363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:10:49.489373 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:10:49.489384 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:10:49.489986 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:10:49.490024 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:10:49.490038 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 14:10:49.490050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:10:49.490063 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:10:49.490075 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 30 14:10:49.490093 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:10:49.490106 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:10:49.490120 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:10:49.490132 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:10:49.490145 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:10:49.490158 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:10:49.490170 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:10:49.490185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:10:49.490198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:10:49.490211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:10:49.490224 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:10:49.490236 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:10:49.490249 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:10:49.490265 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:10:49.490281 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:10:49.490307 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:10:49.490323 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:10:49.490336 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:10:49.490350 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:49.490363 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:10:49.490376 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:10:49.490389 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:49.490457 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:10:49.490473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:49.493023 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:10:49.493063 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:49.493076 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:10:49.493088 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 14:10:49.493099 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 14:10:49.493115 kernel: fuse: init (API version 7.39)
Jan 30 14:10:49.493128 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:10:49.493139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:10:49.493150 kernel: ACPI: bus type drm_connector registered
Jan 30 14:10:49.493160 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:10:49.493171 kernel: loop: module loaded
Jan 30 14:10:49.493181 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:10:49.493192 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:10:49.493203 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 14:10:49.493215 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 14:10:49.493226 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 14:10:49.493236 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 14:10:49.493247 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 14:10:49.493258 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 14:10:49.493269 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 14:10:49.493313 systemd-journald[1161]: Collecting audit messages is disabled.
Jan 30 14:10:49.493342 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:10:49.493355 systemd-journald[1161]: Journal started
Jan 30 14:10:49.493377 systemd-journald[1161]: Runtime Journal (/run/log/journal/e061efef21604a4d942649e349fd975e) is 8.0M, max 76.5M, 68.5M free.
Jan 30 14:10:49.494776 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 14:10:49.494824 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 14:10:49.497117 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:10:49.498245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:49.498507 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:49.499372 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:10:49.499535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:10:49.500744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:49.500899 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:49.501822 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 14:10:49.501974 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 14:10:49.502844 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:49.506114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:49.507922 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:10:49.508866 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 14:10:49.511377 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 14:10:49.523393 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 14:10:49.530566 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 14:10:49.533515 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 14:10:49.534145 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:10:49.545681 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 14:10:49.553285 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 14:10:49.557519 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:10:49.563606 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 14:10:49.564386 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:10:49.567357 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:10:49.583002 systemd-journald[1161]: Time spent on flushing to /var/log/journal/e061efef21604a4d942649e349fd975e is 39.834ms for 1113 entries.
Jan 30 14:10:49.583002 systemd-journald[1161]: System Journal (/var/log/journal/e061efef21604a4d942649e349fd975e) is 8.0M, max 584.8M, 576.8M free.
Jan 30 14:10:49.630646 systemd-journald[1161]: Received client request to flush runtime journal.
Jan 30 14:10:49.584823 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:10:49.590336 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 14:10:49.591535 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 14:10:49.609691 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:10:49.618837 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 14:10:49.621452 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 14:10:49.622192 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 14:10:49.636883 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 14:10:49.645078 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 14:10:49.652216 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:10:49.659337 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Jan 30 14:10:49.659358 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Jan 30 14:10:49.664041 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:10:49.673716 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 14:10:49.710248 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 14:10:49.717783 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:10:49.736791 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Jan 30 14:10:49.736811 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Jan 30 14:10:49.741195 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:10:50.147797 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 14:10:50.153747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:10:50.178067 systemd-udevd[1234]: Using default interface naming scheme 'v255'.
Jan 30 14:10:50.200785 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:10:50.213057 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:10:50.228585 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 14:10:50.265204 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Jan 30 14:10:50.288313 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 14:10:50.365699 systemd-networkd[1241]: lo: Link UP
Jan 30 14:10:50.365709 systemd-networkd[1241]: lo: Gained carrier
Jan 30 14:10:50.367359 systemd-networkd[1241]: Enumeration completed
Jan 30 14:10:50.367515 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:10:50.368001 systemd-networkd[1241]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:50.368006 systemd-networkd[1241]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:50.368779 systemd-networkd[1241]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:50.368782 systemd-networkd[1241]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:10:50.369283 systemd-networkd[1241]: eth0: Link UP
Jan 30 14:10:50.369287 systemd-networkd[1241]: eth0: Gained carrier
Jan 30 14:10:50.369301 systemd-networkd[1241]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:50.375675 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 14:10:50.377893 systemd-networkd[1241]: eth1: Link UP
Jan 30 14:10:50.377906 systemd-networkd[1241]: eth1: Gained carrier
Jan 30 14:10:50.377926 systemd-networkd[1241]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:50.408168 systemd-networkd[1241]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:50.440459 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1244)
Jan 30 14:10:50.441847 systemd-networkd[1241]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 14:10:50.446417 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 14:10:50.449573 systemd-networkd[1241]: eth0: DHCPv4 address 138.199.157.113/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 14:10:50.450781 systemd-networkd[1241]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:10:50.506752 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 30 14:10:50.511153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:50.519269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:50.536168 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 30 14:10:50.536234 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 30 14:10:50.536248 kernel: [drm] features: -context_init
Jan 30 14:10:50.534704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:50.538008 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:50.539798 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:10:50.539841 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:10:50.540243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:50.540435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:50.545788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:50.545972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:50.547461 kernel: [drm] number of scanouts: 1
Jan 30 14:10:50.547504 kernel: [drm] number of cap sets: 0
Jan 30 14:10:50.548416 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 30 14:10:50.553095 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 14:10:50.553246 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:10:50.560473 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 30 14:10:50.568084 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:50.571733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:50.576688 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:10:50.590028 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:10:50.654840 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:10:50.712083 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 14:10:50.718728 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 14:10:50.733439 lvm[1302]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:10:50.765360 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 14:10:50.769223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:10:50.780710 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 14:10:50.785739 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 14:10:50.814697 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 14:10:50.817552 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:10:50.819196 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:10:50.819282 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:10:50.820045 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:10:50.822059 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 14:10:50.829781 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 14:10:50.834651 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 14:10:50.836634 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:50.838627 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 14:10:50.850535 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 14:10:50.865337 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 14:10:50.871816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 14:10:50.879816 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 14:10:50.893424 kernel: loop0: detected capacity change from 0 to 114328
Jan 30 14:10:50.894748 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 14:10:50.896693 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 14:10:50.921848 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 14:10:50.940441 kernel: loop1: detected capacity change from 0 to 8
Jan 30 14:10:50.961434 kernel: loop2: detected capacity change from 0 to 114432
Jan 30 14:10:50.996893 kernel: loop3: detected capacity change from 0 to 194096
Jan 30 14:10:51.051445 kernel: loop4: detected capacity change from 0 to 114328
Jan 30 14:10:51.061517 kernel: loop5: detected capacity change from 0 to 8
Jan 30 14:10:51.062453 kernel: loop6: detected capacity change from 0 to 114432
Jan 30 14:10:51.071434 kernel: loop7: detected capacity change from 0 to 194096
Jan 30 14:10:51.084830 (sd-merge)[1326]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 30 14:10:51.085377 (sd-merge)[1326]: Merged extensions into '/usr'.
Jan 30 14:10:51.090989 systemd[1]: Reloading requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 14:10:51.091008 systemd[1]: Reloading...
Jan 30 14:10:51.179430 zram_generator::config[1354]: No configuration found.
Jan 30 14:10:51.252493 ldconfig[1310]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 14:10:51.321200 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:10:51.378576 systemd[1]: Reloading finished in 287 ms.
Jan 30 14:10:51.395987 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 14:10:51.397078 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 14:10:51.405685 systemd[1]: Starting ensure-sysext.service...
Jan 30 14:10:51.418802 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:10:51.428875 systemd[1]: Reloading requested from client PID 1398 ('systemctl') (unit ensure-sysext.service)...
Jan 30 14:10:51.428895 systemd[1]: Reloading...
Jan 30 14:10:51.442151 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 14:10:51.442454 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 14:10:51.443145 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 14:10:51.443371 systemd-tmpfiles[1399]: ACLs are not supported, ignoring.
Jan 30 14:10:51.444152 systemd-tmpfiles[1399]: ACLs are not supported, ignoring.
Jan 30 14:10:51.451912 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:10:51.452064 systemd-tmpfiles[1399]: Skipping /boot
Jan 30 14:10:51.461871 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 14:10:51.462015 systemd-tmpfiles[1399]: Skipping /boot
Jan 30 14:10:51.500435 zram_generator::config[1429]: No configuration found.
Jan 30 14:10:51.610058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:10:51.667177 systemd[1]: Reloading finished in 237 ms.
Jan 30 14:10:51.687501 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:10:51.701664 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 14:10:51.706591 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 14:10:51.710807 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 14:10:51.715494 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:10:51.723906 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 14:10:51.740333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:51.747145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:51.752275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:51.761442 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:10:51.764955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:51.776776 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 14:10:51.780252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:51.781429 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:51.787525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:51.787903 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:51.792083 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 14:10:51.796911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:51.797092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:51.806115 augenrules[1499]: No rules
Jan 30 14:10:51.805236 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:10:51.810546 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:10:51.812867 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 14:10:51.819822 systemd[1]: Finished ensure-sysext.service.
Jan 30 14:10:51.821209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:10:51.828748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:10:51.834666 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:10:51.837818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:10:51.843536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:10:51.851628 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 14:10:51.864455 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 14:10:51.870086 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:10:51.870269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:10:51.877069 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 14:10:51.880506 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:10:51.880968 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:10:51.881817 systemd-resolved[1478]: Positive Trust Anchors:
Jan 30 14:10:51.881835 systemd-resolved[1478]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:10:51.881870 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:10:51.884780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:10:51.884943 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:10:51.886282 systemd-resolved[1478]: Using system hostname 'ci-4081-3-0-0-5370901337'.
Jan 30 14:10:51.890122 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:10:51.894781 systemd[1]: Reached target network.target - Network.
Jan 30 14:10:51.895329 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:10:51.896040 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:10:51.896113 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:10:51.896139 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 14:10:51.907965 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 14:10:51.948564 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 14:10:51.951201 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:10:51.952291 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 14:10:51.953263 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 14:10:51.954414 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 14:10:51.955457 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 14:10:51.955508 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:10:51.956097 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 14:10:51.956938 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 14:10:51.957658 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 14:10:51.958284 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:10:51.959001 systemd-timesyncd[1521]: Contacted time server 188.245.97.96:123 (0.flatcar.pool.ntp.org).
Jan 30 14:10:51.959067 systemd-timesyncd[1521]: Initial clock synchronization to Thu 2025-01-30 14:10:51.883129 UTC.
Jan 30 14:10:51.960293 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 14:10:51.962597 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 14:10:51.964657 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 14:10:51.968005 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 14:10:51.968743 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:10:51.969335 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:10:51.970213 systemd[1]: System is tainted: cgroupsv1
Jan 30 14:10:51.970263 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 14:10:51.970290 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 14:10:51.973553 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 14:10:51.976686 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 30 14:10:51.981778 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 14:10:51.984568 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 14:10:51.989140 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 14:10:51.990699 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 14:10:51.997700 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 14:10:52.004053 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 14:10:52.010289 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 30 14:10:52.016561 jq[1540]: false
Jan 30 14:10:52.033049 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 14:10:52.043852 dbus-daemon[1538]: [system] SELinux support is enabled
Jan 30 14:10:52.060842 coreos-metadata[1537]: Jan 30 14:10:52.045 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 30 14:10:52.060842 coreos-metadata[1537]: Jan 30 14:10:52.054 INFO Fetch successful
Jan 30 14:10:52.060842 coreos-metadata[1537]: Jan 30 14:10:52.054 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 30 14:10:52.060842 coreos-metadata[1537]: Jan 30 14:10:52.056 INFO Fetch successful
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found loop4
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found loop5
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found loop6
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found loop7
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found sda
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found sda1
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found sda2
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found sda3
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found usr
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found sda4
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found sda6
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found sda7
Jan 30 14:10:52.063660 extend-filesystems[1541]: Found sda9
Jan 30 14:10:52.063660 extend-filesystems[1541]: Checking size of /dev/sda9
Jan 30 14:10:52.045674 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 14:10:52.157808 extend-filesystems[1541]: Resized partition /dev/sda9
Jan 30 14:10:52.061713 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 14:10:52.167776 extend-filesystems[1587]: resize2fs 1.47.1 (20-May-2024)
Jan 30 14:10:52.173611 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 30 14:10:52.065637 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 14:10:52.069705 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 14:10:52.080176 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 14:10:52.174027 update_engine[1559]: I20250130 14:10:52.126603 1559 main.cc:92] Flatcar Update Engine starting
Jan 30 14:10:52.174027 update_engine[1559]: I20250130 14:10:52.133603 1559 update_check_scheduler.cc:74] Next update check in 8m0s
Jan 30 14:10:52.082458 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 14:10:52.174364 jq[1564]: true
Jan 30 14:10:52.092104 systemd-networkd[1241]: eth0: Gained IPv6LL
Jan 30 14:10:52.096330 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 14:10:52.096602 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 14:10:52.120794 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 30 14:10:52.131549 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 14:10:52.131806 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 14:10:52.153207 systemd[1]: Reached target network-online.target - Network is Online.
Jan 30 14:10:52.173148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:10:52.180612 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 30 14:10:52.181234 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 14:10:52.181266 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 14:10:52.210813 jq[1581]: true
Jan 30 14:10:52.182529 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 14:10:52.182548 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 14:10:52.187073 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 14:10:52.187315 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 14:10:52.208033 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 14:10:52.209679 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 14:10:52.212675 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 14:10:52.222586 tar[1568]: linux-arm64/helm
Jan 30 14:10:52.223828 systemd-networkd[1241]: eth1: Gained IPv6LL
Jan 30 14:10:52.228053 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 14:10:52.291984 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 30 14:10:52.294863 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 30 14:10:52.299694 systemd-logind[1556]: New seat seat0.
Jan 30 14:10:52.388643 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 30 14:10:52.398271 systemd-logind[1556]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 30 14:10:52.398293 systemd-logind[1556]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 30 14:10:52.403537 bash[1634]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 14:10:52.398532 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 14:10:52.410529 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 14:10:52.433080 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1238)
Jan 30 14:10:52.436804 systemd[1]: Starting sshkeys.service...
Jan 30 14:10:52.453164 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 30 14:10:52.488473 extend-filesystems[1587]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 30 14:10:52.488473 extend-filesystems[1587]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 30 14:10:52.488473 extend-filesystems[1587]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 30 14:10:52.487289 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 14:10:52.498366 extend-filesystems[1541]: Resized filesystem in /dev/sda9
Jan 30 14:10:52.498366 extend-filesystems[1541]: Found sr0
Jan 30 14:10:52.487597 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 14:10:52.501342 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 30 14:10:52.517153 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 30 14:10:52.556416 containerd[1593]: time="2025-01-30T14:10:52.553908798Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 14:10:52.588610 coreos-metadata[1649]: Jan 30 14:10:52.588 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 30 14:10:52.592757 coreos-metadata[1649]: Jan 30 14:10:52.589 INFO Fetch successful
Jan 30 14:10:52.595955 unknown[1649]: wrote ssh authorized keys file for user: core
Jan 30 14:10:52.621206 sshd_keygen[1585]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 14:10:52.629683 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 14:10:52.640801 update-ssh-keys[1655]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 14:10:52.638800 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 30 14:10:52.649037 systemd[1]: Finished sshkeys.service.
Jan 30 14:10:52.660669 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 14:10:52.661764 containerd[1593]: time="2025-01-30T14:10:52.661520403Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.665969741Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666017881Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666036304Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666224703Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666261590Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666342061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666359732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666586563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666604115Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666618221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667431 containerd[1593]: time="2025-01-30T14:10:52.666629909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667707 containerd[1593]: time="2025-01-30T14:10:52.666716679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667707 containerd[1593]: time="2025-01-30T14:10:52.666906940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667707 containerd[1593]: time="2025-01-30T14:10:52.667037016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 14:10:52.667707 containerd[1593]: time="2025-01-30T14:10:52.667051201Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 14:10:52.667707 containerd[1593]: time="2025-01-30T14:10:52.667124341Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 14:10:52.667707 containerd[1593]: time="2025-01-30T14:10:52.667164042Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 14:10:52.669885 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 14:10:52.678457 containerd[1593]: time="2025-01-30T14:10:52.678085656Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 14:10:52.678457 containerd[1593]: time="2025-01-30T14:10:52.678162165Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 14:10:52.678457 containerd[1593]: time="2025-01-30T14:10:52.678180747Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 14:10:52.678457 containerd[1593]: time="2025-01-30T14:10:52.678201033Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 14:10:52.678457 containerd[1593]: time="2025-01-30T14:10:52.678218070Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 14:10:52.679095 containerd[1593]: time="2025-01-30T14:10:52.678382617Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 14:10:52.679668 containerd[1593]: time="2025-01-30T14:10:52.679605722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 14:10:52.679828 containerd[1593]: time="2025-01-30T14:10:52.679798202Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 14:10:52.679828 containerd[1593]: time="2025-01-30T14:10:52.679822173Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 14:10:52.679883 containerd[1593]: time="2025-01-30T14:10:52.679836397Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 14:10:52.679883 containerd[1593]: time="2025-01-30T14:10:52.679858149Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 14:10:52.679883 containerd[1593]: time="2025-01-30T14:10:52.679871659Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 14:10:52.679935 containerd[1593]: time="2025-01-30T14:10:52.679884774Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 14:10:52.679935 containerd[1593]: time="2025-01-30T14:10:52.679900464Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 14:10:52.679935 containerd[1593]: time="2025-01-30T14:10:52.679915322Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 14:10:52.679935 containerd[1593]: time="2025-01-30T14:10:52.679928872Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 14:10:52.680008 containerd[1593]: time="2025-01-30T14:10:52.679940759Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 14:10:52.680008 containerd[1593]: time="2025-01-30T14:10:52.679954032Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 14:10:52.680008 containerd[1593]: time="2025-01-30T14:10:52.679974318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.680008 containerd[1593]: time="2025-01-30T14:10:52.679989889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.680008 containerd[1593]: time="2025-01-30T14:10:52.680002964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.680127 containerd[1593]: time="2025-01-30T14:10:52.680016277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.680127 containerd[1593]: time="2025-01-30T14:10:52.680030065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.680127 containerd[1593]: time="2025-01-30T14:10:52.680044051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680234391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680271160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680293942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680346836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680367558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680696057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680723871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680751487Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680787701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680807828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.680827639Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.681081135Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.681122143Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 14:10:52.681231 containerd[1593]: time="2025-01-30T14:10:52.681137080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 14:10:52.681603 containerd[1593]: time="2025-01-30T14:10:52.681157604Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 14:10:52.681603 containerd[1593]: time="2025-01-30T14:10:52.681173492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.681603 containerd[1593]: time="2025-01-30T14:10:52.681191995Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 14:10:52.681603 containerd[1593]: time="2025-01-30T14:10:52.681204872Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 14:10:52.682167 containerd[1593]: time="2025-01-30T14:10:52.682134068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 14:10:52.682614 containerd[1593]: time="2025-01-30T14:10:52.682553616Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 14:10:52.682794 containerd[1593]: time="2025-01-30T14:10:52.682773633Z" level=info msg="Connect containerd service"
Jan 30 14:10:52.682894 containerd[1593]: time="2025-01-30T14:10:52.682879857Z" level=info msg="using legacy CRI server"
Jan 30 14:10:52.682962 containerd[1593]: time="2025-01-30T14:10:52.682948243Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 14:10:52.683145 containerd[1593]: time="2025-01-30T14:10:52.683125548Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 14:10:52.684009 containerd[1593]: time="2025-01-30T14:10:52.683974194Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 14:10:52.684654 containerd[1593]: time="2025-01-30T14:10:52.684629845Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 14:10:52.684773 containerd[1593]: time="2025-01-30T14:10:52.684750689Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 14:10:52.684938 containerd[1593]: time="2025-01-30T14:10:52.684914483Z" level=info msg="Start subscribing containerd event"
Jan 30 14:10:52.685019 containerd[1593]: time="2025-01-30T14:10:52.685003353Z" level=info msg="Start recovering state"
Jan 30 14:10:52.685201 containerd[1593]: time="2025-01-30T14:10:52.685182600Z" level=info msg="Start event monitor"
Jan 30 14:10:52.685276 containerd[1593]: time="2025-01-30T14:10:52.685259861Z" level=info msg="Start snapshots syncer"
Jan 30 14:10:52.686407 containerd[1593]: time="2025-01-30T14:10:52.685326623Z" level=info msg="Start cni network conf syncer for default"
Jan 30 14:10:52.686407 containerd[1593]: time="2025-01-30T14:10:52.685342233Z" level=info msg="Start streaming server"
Jan 30 14:10:52.686407 containerd[1593]: time="2025-01-30T14:10:52.685563557Z" level=info msg="containerd successfully booted in 0.134554s"
Jan 30 14:10:52.685680 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 14:10:52.694881 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 14:10:52.695170 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 14:10:52.705710 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 14:10:52.720671 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 14:10:52.729745 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 14:10:52.739824 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 30 14:10:52.741163 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 14:10:53.009538 tar[1568]: linux-arm64/LICENSE
Jan 30 14:10:53.009538 tar[1568]: linux-arm64/README.md
Jan 30 14:10:53.033202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 30 14:10:53.284358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:10:53.285692 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 30 14:10:53.289508 systemd[1]: Startup finished in 6.793s (kernel) + 4.614s (userspace) = 11.408s.
Jan 30 14:10:53.294671 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:10:53.894083 kubelet[1698]: E0130 14:10:53.894018 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:10:53.897754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:10:53.898055 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:04.148844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:11:04.154655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:11:04.288594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:04.301114 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:04.357494 kubelet[1723]: E0130 14:11:04.357418 1723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:04.360442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:04.360597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:14.611650 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 14:11:14.619758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:11:14.729684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:14.735911 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:14.786801 kubelet[1744]: E0130 14:11:14.786742 1744 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:14.791041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:14.791337 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:24.851769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 30 14:11:24.858739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:11:24.988634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:25.000146 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:25.055392 kubelet[1765]: E0130 14:11:25.055328 1765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:25.058078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:25.058522 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:35.101856 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 30 14:11:35.109756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:11:35.248117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:35.252038 (kubelet)[1785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:35.313300 kubelet[1785]: E0130 14:11:35.313206 1785 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:35.315835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:35.316058 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:37.883485 update_engine[1559]: I20250130 14:11:37.882525 1559 update_attempter.cc:509] Updating boot flags...
Jan 30 14:11:37.941449 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1803)
Jan 30 14:11:37.997486 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1804)
Jan 30 14:11:38.057495 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1804)
Jan 30 14:11:45.352036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 30 14:11:45.369089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:11:45.513622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:45.519077 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:45.563875 kubelet[1827]: E0130 14:11:45.563758 1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:45.566038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:45.566197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:11:55.601504 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 30 14:11:55.609755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:11:55.732765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:11:55.745172 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:11:55.795345 kubelet[1848]: E0130 14:11:55.795249 1848 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:11:55.798384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:11:55.798719 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:12:05.851980 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 30 14:12:05.860827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:12:05.988633 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:12:05.992655 (kubelet)[1869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 14:12:06.040770 kubelet[1869]: E0130 14:12:06.040699 1869 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 14:12:06.043024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 14:12:06.043169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 14:12:16.102275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 30 14:12:16.110772 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:12:16.244656 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:16.247495 (kubelet)[1890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:16.296207 kubelet[1890]: E0130 14:12:16.296157 1890 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:16.299451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:16.299683 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:26.351800 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 14:12:26.359735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:26.488646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:26.497814 (kubelet)[1911]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:26.549496 kubelet[1911]: E0130 14:12:26.549428 1911 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:26.551910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:26.552142 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:36.601292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Jan 30 14:12:36.606744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:36.733675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:36.736818 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:36.777937 kubelet[1931]: E0130 14:12:36.777877 1931 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:36.780161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:36.780339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:38.780972 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:12:38.789108 systemd[1]: Started sshd@0-138.199.157.113:22-139.178.68.195:57632.service - OpenSSH per-connection server daemon (139.178.68.195:57632). Jan 30 14:12:39.788414 sshd[1940]: Accepted publickey for core from 139.178.68.195 port 57632 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:39.791525 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:39.805195 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:12:39.806727 systemd-logind[1556]: New session 1 of user core. Jan 30 14:12:39.820943 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:12:39.838773 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:12:39.845923 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 14:12:39.850766 (systemd)[1946]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:12:39.958504 systemd[1946]: Queued start job for default target default.target. Jan 30 14:12:39.958947 systemd[1946]: Created slice app.slice - User Application Slice. Jan 30 14:12:39.958965 systemd[1946]: Reached target paths.target - Paths. Jan 30 14:12:39.958977 systemd[1946]: Reached target timers.target - Timers. Jan 30 14:12:39.964633 systemd[1946]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:12:39.975681 systemd[1946]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:12:39.975767 systemd[1946]: Reached target sockets.target - Sockets. Jan 30 14:12:39.975786 systemd[1946]: Reached target basic.target - Basic System. Jan 30 14:12:39.975846 systemd[1946]: Reached target default.target - Main User Target. Jan 30 14:12:39.975876 systemd[1946]: Startup finished in 118ms. Jan 30 14:12:39.976835 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:12:39.986584 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:12:40.673834 systemd[1]: Started sshd@1-138.199.157.113:22-139.178.68.195:57638.service - OpenSSH per-connection server daemon (139.178.68.195:57638). Jan 30 14:12:41.638348 sshd[1958]: Accepted publickey for core from 139.178.68.195 port 57638 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:41.640644 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:41.645655 systemd-logind[1556]: New session 2 of user core. Jan 30 14:12:41.657064 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:12:42.315824 sshd[1958]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:42.320240 systemd[1]: sshd@1-138.199.157.113:22-139.178.68.195:57638.service: Deactivated successfully. Jan 30 14:12:42.324700 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 30 14:12:42.325604 systemd-logind[1556]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:12:42.326553 systemd-logind[1556]: Removed session 2. Jan 30 14:12:42.483937 systemd[1]: Started sshd@2-138.199.157.113:22-139.178.68.195:57648.service - OpenSSH per-connection server daemon (139.178.68.195:57648). Jan 30 14:12:43.471854 sshd[1966]: Accepted publickey for core from 139.178.68.195 port 57648 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:43.475189 sshd[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:43.480896 systemd-logind[1556]: New session 3 of user core. Jan 30 14:12:43.490039 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:12:44.154445 sshd[1966]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:44.159700 systemd[1]: sshd@2-138.199.157.113:22-139.178.68.195:57648.service: Deactivated successfully. Jan 30 14:12:44.163752 systemd-logind[1556]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:12:44.164305 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:12:44.165171 systemd-logind[1556]: Removed session 3. Jan 30 14:12:44.324831 systemd[1]: Started sshd@3-138.199.157.113:22-139.178.68.195:57654.service - OpenSSH per-connection server daemon (139.178.68.195:57654). Jan 30 14:12:45.300723 sshd[1974]: Accepted publickey for core from 139.178.68.195 port 57654 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:45.303392 sshd[1974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:45.308971 systemd-logind[1556]: New session 4 of user core. Jan 30 14:12:45.319000 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:12:45.984887 sshd[1974]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:45.989193 systemd[1]: sshd@3-138.199.157.113:22-139.178.68.195:57654.service: Deactivated successfully. 
Jan 30 14:12:45.992202 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:12:45.993287 systemd-logind[1556]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:12:45.994665 systemd-logind[1556]: Removed session 4. Jan 30 14:12:46.155917 systemd[1]: Started sshd@4-138.199.157.113:22-139.178.68.195:51464.service - OpenSSH per-connection server daemon (139.178.68.195:51464). Jan 30 14:12:46.851382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 14:12:46.866730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:46.997937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:12:47.010145 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:47.063733 kubelet[1996]: E0130 14:12:47.063656 1996 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:47.068896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:47.069212 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:47.152382 sshd[1982]: Accepted publickey for core from 139.178.68.195 port 51464 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:47.154125 sshd[1982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:47.160779 systemd-logind[1556]: New session 5 of user core. Jan 30 14:12:47.170967 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 30 14:12:47.692147 sudo[2007]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:12:47.692534 sudo[2007]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:12:47.707364 sudo[2007]: pam_unix(sudo:session): session closed for user root Jan 30 14:12:47.869853 sshd[1982]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:47.876236 systemd[1]: sshd@4-138.199.157.113:22-139.178.68.195:51464.service: Deactivated successfully. Jan 30 14:12:47.880235 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:12:47.881277 systemd-logind[1556]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:12:47.882666 systemd-logind[1556]: Removed session 5. Jan 30 14:12:48.039810 systemd[1]: Started sshd@5-138.199.157.113:22-139.178.68.195:51474.service - OpenSSH per-connection server daemon (139.178.68.195:51474). Jan 30 14:12:49.030826 sshd[2012]: Accepted publickey for core from 139.178.68.195 port 51474 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:49.033479 sshd[2012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:49.038219 systemd-logind[1556]: New session 6 of user core. Jan 30 14:12:49.049085 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 14:12:49.558577 sudo[2017]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:12:49.559252 sudo[2017]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:12:49.564483 sudo[2017]: pam_unix(sudo:session): session closed for user root Jan 30 14:12:49.570548 sudo[2016]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:12:49.570853 sudo[2016]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:12:49.590000 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:12:49.593486 auditctl[2020]: No rules Jan 30 14:12:49.593843 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:12:49.594094 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:12:49.602068 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:12:49.629144 augenrules[2039]: No rules Jan 30 14:12:49.630958 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:12:49.632949 sudo[2016]: pam_unix(sudo:session): session closed for user root Jan 30 14:12:49.795759 sshd[2012]: pam_unix(sshd:session): session closed for user core Jan 30 14:12:49.800551 systemd-logind[1556]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:12:49.801753 systemd[1]: sshd@5-138.199.157.113:22-139.178.68.195:51474.service: Deactivated successfully. Jan 30 14:12:49.804372 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:12:49.805200 systemd-logind[1556]: Removed session 6. Jan 30 14:12:49.959804 systemd[1]: Started sshd@6-138.199.157.113:22-139.178.68.195:51482.service - OpenSSH per-connection server daemon (139.178.68.195:51482). 
Jan 30 14:12:50.940157 sshd[2048]: Accepted publickey for core from 139.178.68.195 port 51482 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:12:50.942638 sshd[2048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:12:50.947915 systemd-logind[1556]: New session 7 of user core. Jan 30 14:12:50.967994 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:12:51.467746 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:12:51.468036 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:12:51.788031 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:12:51.788370 (dockerd)[2067]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:12:52.043617 dockerd[2067]: time="2025-01-30T14:12:52.043447106Z" level=info msg="Starting up" Jan 30 14:12:52.125157 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3620686876-merged.mount: Deactivated successfully. Jan 30 14:12:52.150480 dockerd[2067]: time="2025-01-30T14:12:52.150192997Z" level=info msg="Loading containers: start." Jan 30 14:12:52.247726 kernel: Initializing XFRM netlink socket Jan 30 14:12:52.323503 systemd-networkd[1241]: docker0: Link UP Jan 30 14:12:52.338063 dockerd[2067]: time="2025-01-30T14:12:52.337986910Z" level=info msg="Loading containers: done." 
Jan 30 14:12:52.360824 dockerd[2067]: time="2025-01-30T14:12:52.360767080Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:12:52.361042 dockerd[2067]: time="2025-01-30T14:12:52.360903561Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:12:52.361083 dockerd[2067]: time="2025-01-30T14:12:52.361050882Z" level=info msg="Daemon has completed initialization" Jan 30 14:12:52.395685 dockerd[2067]: time="2025-01-30T14:12:52.395348878Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:12:52.396034 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:12:53.570289 containerd[1593]: time="2025-01-30T14:12:53.569957341Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:12:54.196942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656846439.mount: Deactivated successfully. 
Jan 30 14:12:55.130136 containerd[1593]: time="2025-01-30T14:12:55.130062530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:55.132341 containerd[1593]: time="2025-01-30T14:12:55.132027980Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027" Jan 30 14:12:55.135229 containerd[1593]: time="2025-01-30T14:12:55.135157796Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:55.143268 containerd[1593]: time="2025-01-30T14:12:55.141451469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:55.144003 containerd[1593]: time="2025-01-30T14:12:55.143950002Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.573938941s" Jan 30 14:12:55.144150 containerd[1593]: time="2025-01-30T14:12:55.144123643Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 14:12:55.169874 containerd[1593]: time="2025-01-30T14:12:55.169832137Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:12:56.420705 containerd[1593]: time="2025-01-30T14:12:56.420649737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:56.423501 containerd[1593]: time="2025-01-30T14:12:56.423458191Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581" Jan 30 14:12:56.425595 containerd[1593]: time="2025-01-30T14:12:56.425481482Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:56.429781 containerd[1593]: time="2025-01-30T14:12:56.429690583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:56.431643 containerd[1593]: time="2025-01-30T14:12:56.431282191Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.261403494s" Jan 30 14:12:56.431643 containerd[1593]: time="2025-01-30T14:12:56.431333231Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 14:12:56.454204 containerd[1593]: time="2025-01-30T14:12:56.453974625Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:12:57.101351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 14:12:57.108595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:12:57.237735 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:12:57.243344 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:12:57.287881 kubelet[2292]: E0130 14:12:57.287751 2292 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:12:57.292270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:12:57.293255 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:12:57.465253 containerd[1593]: time="2025-01-30T14:12:57.465072691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:57.467142 containerd[1593]: time="2025-01-30T14:12:57.467086941Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358" Jan 30 14:12:57.468499 containerd[1593]: time="2025-01-30T14:12:57.468460468Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:57.472382 containerd[1593]: time="2025-01-30T14:12:57.472314247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:57.473365 containerd[1593]: time="2025-01-30T14:12:57.473323212Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.019310707s" Jan 30 14:12:57.473365 containerd[1593]: time="2025-01-30T14:12:57.473362932Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 14:12:57.500647 containerd[1593]: time="2025-01-30T14:12:57.500517305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:12:58.460845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714443316.mount: Deactivated successfully. Jan 30 14:12:58.804882 containerd[1593]: time="2025-01-30T14:12:58.804726716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:58.807021 containerd[1593]: time="2025-01-30T14:12:58.806984247Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738" Jan 30 14:12:58.811165 containerd[1593]: time="2025-01-30T14:12:58.810937106Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:58.814428 containerd[1593]: time="2025-01-30T14:12:58.814341842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:58.816088 containerd[1593]: time="2025-01-30T14:12:58.815358727Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.314792382s" Jan 30 14:12:58.816088 containerd[1593]: time="2025-01-30T14:12:58.815424887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 14:12:58.838092 containerd[1593]: time="2025-01-30T14:12:58.838055074Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:12:59.409810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243895039.mount: Deactivated successfully. Jan 30 14:12:59.989974 containerd[1593]: time="2025-01-30T14:12:59.989900585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.991530 containerd[1593]: time="2025-01-30T14:12:59.991441112Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 30 14:12:59.992990 containerd[1593]: time="2025-01-30T14:12:59.992496197Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.997075 containerd[1593]: time="2025-01-30T14:12:59.996992737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:12:59.998706 containerd[1593]: time="2025-01-30T14:12:59.998577424Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.160230228s" Jan 30 14:12:59.998894 containerd[1593]: time="2025-01-30T14:12:59.998825666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:13:00.021983 containerd[1593]: time="2025-01-30T14:13:00.021913049Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:13:00.557597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807286792.mount: Deactivated successfully. Jan 30 14:13:00.566981 containerd[1593]: time="2025-01-30T14:13:00.565916428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:00.568420 containerd[1593]: time="2025-01-30T14:13:00.568369839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 30 14:13:00.569610 containerd[1593]: time="2025-01-30T14:13:00.569560725Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:00.573583 containerd[1593]: time="2025-01-30T14:13:00.573545742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:00.574603 containerd[1593]: time="2025-01-30T14:13:00.574568907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 552.612898ms" Jan 30 
14:13:00.574722 containerd[1593]: time="2025-01-30T14:13:00.574606067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 14:13:00.598494 containerd[1593]: time="2025-01-30T14:13:00.598455693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:13:01.133781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859990555.mount: Deactivated successfully. Jan 30 14:13:02.557431 containerd[1593]: time="2025-01-30T14:13:02.555911033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:02.557431 containerd[1593]: time="2025-01-30T14:13:02.556920917Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 30 14:13:02.558345 containerd[1593]: time="2025-01-30T14:13:02.558308443Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:02.563739 containerd[1593]: time="2025-01-30T14:13:02.563692426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:02.565142 containerd[1593]: time="2025-01-30T14:13:02.565099311Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.966604258s" Jan 30 14:13:02.565142 containerd[1593]: time="2025-01-30T14:13:02.565141952Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 14:13:06.870661 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:06.883997 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:06.912797 systemd[1]: Reloading requested from client PID 2485 ('systemctl') (unit session-7.scope)... Jan 30 14:13:06.912955 systemd[1]: Reloading... Jan 30 14:13:07.034433 zram_generator::config[2529]: No configuration found. Jan 30 14:13:07.144407 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:13:07.208711 systemd[1]: Reloading finished in 295 ms. Jan 30 14:13:07.262518 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:13:07.262605 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:13:07.262913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:07.266529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:13:07.385603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:13:07.399162 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:13:07.445515 kubelet[2585]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:13:07.445893 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 14:13:07.445945 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:13:07.446154 kubelet[2585]: I0130 14:13:07.446124 2585 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 14:13:07.939491 kubelet[2585]: I0130 14:13:07.939435 2585 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 14:13:07.939709 kubelet[2585]: I0130 14:13:07.939697 2585 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 14:13:07.940011 kubelet[2585]: I0130 14:13:07.939994 2585 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 14:13:07.959480 kubelet[2585]: I0130 14:13:07.959297 2585 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:13:07.960044 kubelet[2585]: E0130 14:13:07.959810 2585 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.157.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:07.969987 kubelet[2585]: I0130 14:13:07.969957 2585 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 14:13:07.972010 kubelet[2585]: I0130 14:13:07.971922 2585 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 14:13:07.972250 kubelet[2585]: I0130 14:13:07.972003 2585 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-0-5370901337","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 14:13:07.972365 kubelet[2585]: I0130 14:13:07.972352 2585 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 14:13:07.972419 kubelet[2585]: I0130 14:13:07.972367 2585 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 14:13:07.972717 kubelet[2585]: I0130 14:13:07.972688 2585 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:13:07.973994 kubelet[2585]: I0130 14:13:07.973947 2585 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 14:13:07.973994 kubelet[2585]: I0130 14:13:07.973972 2585 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 14:13:07.974148 kubelet[2585]: I0130 14:13:07.974125 2585 kubelet.go:312] "Adding apiserver pod source"
Jan 30 14:13:07.975481 kubelet[2585]: I0130 14:13:07.974203 2585 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 14:13:07.975847 kubelet[2585]: W0130 14:13:07.975686 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.157.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:07.975847 kubelet[2585]: E0130 14:13:07.975749 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.157.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:07.975847 kubelet[2585]: W0130 14:13:07.975806 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.157.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-5370901337&limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:07.975847 kubelet[2585]: E0130 14:13:07.975833 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.157.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-5370901337&limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:07.978298 kubelet[2585]: I0130 14:13:07.976503 2585 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 14:13:07.978298 kubelet[2585]: I0130 14:13:07.976977 2585 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 14:13:07.978298 kubelet[2585]: W0130 14:13:07.977085 2585 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 14:13:07.978806 kubelet[2585]: I0130 14:13:07.978791 2585 server.go:1264] "Started kubelet"
Jan 30 14:13:07.984353 kubelet[2585]: I0130 14:13:07.984296 2585 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:13:07.986576 kubelet[2585]: I0130 14:13:07.986535 2585 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 14:13:07.988425 kubelet[2585]: I0130 14:13:07.987831 2585 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:13:07.988425 kubelet[2585]: I0130 14:13:07.988128 2585 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:13:07.989760 kubelet[2585]: E0130 14:13:07.989538 2585 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.157.113:6443/api/v1/namespaces/default/events\": dial tcp 138.199.157.113:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-0-5370901337.181f7de1579478fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-0-5370901337,UID:ci-4081-3-0-0-5370901337,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-0-5370901337,},FirstTimestamp:2025-01-30 14:13:07.978762492 +0000 UTC m=+0.573070889,LastTimestamp:2025-01-30 14:13:07.978762492 +0000 UTC m=+0.573070889,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-0-5370901337,}"
Jan 30 14:13:07.996491 kubelet[2585]: I0130 14:13:07.996462 2585 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:13:07.996840 kubelet[2585]: I0130 14:13:07.996823 2585 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 14:13:07.999359 kubelet[2585]: I0130 14:13:07.999336 2585 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 14:13:07.999879 kubelet[2585]: I0130 14:13:07.999776 2585 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:13:08.000524 kubelet[2585]: W0130 14:13:08.000425 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.157.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:08.000524 kubelet[2585]: E0130 14:13:08.000480 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.157.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:08.001720 kubelet[2585]: E0130 14:13:08.001555 2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.157.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-5370901337?timeout=10s\": dial tcp 138.199.157.113:6443: connect: connection refused" interval="200ms"
Jan 30 14:13:08.001876 kubelet[2585]: E0130 14:13:08.001858 2585 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:13:08.002638 kubelet[2585]: I0130 14:13:08.002606 2585 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:13:08.003051 kubelet[2585]: I0130 14:13:08.002792 2585 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:13:08.004422 kubelet[2585]: I0130 14:13:08.004340 2585 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:13:08.013444 kubelet[2585]: I0130 14:13:08.011787 2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:13:08.013444 kubelet[2585]: I0130 14:13:08.012807 2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 14:13:08.013444 kubelet[2585]: I0130 14:13:08.013023 2585 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 14:13:08.013444 kubelet[2585]: I0130 14:13:08.013042 2585 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 14:13:08.013444 kubelet[2585]: E0130 14:13:08.013083 2585 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 14:13:08.023836 kubelet[2585]: W0130 14:13:08.023775 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.157.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:08.023836 kubelet[2585]: E0130 14:13:08.023837 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.157.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:08.042841 kubelet[2585]: I0130 14:13:08.042812 2585 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 14:13:08.043075 kubelet[2585]: I0130 14:13:08.043035 2585 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 14:13:08.043179 kubelet[2585]: I0130 14:13:08.043169 2585 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:13:08.048226 kubelet[2585]: I0130 14:13:08.048186 2585 policy_none.go:49] "None policy: Start"
Jan 30 14:13:08.049782 kubelet[2585]: I0130 14:13:08.049739 2585 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 14:13:08.049782 kubelet[2585]: I0130 14:13:08.049784 2585 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 14:13:08.058524 kubelet[2585]: I0130 14:13:08.058298 2585 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:13:08.058728 kubelet[2585]: I0130 14:13:08.058552 2585 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:13:08.058728 kubelet[2585]: I0130 14:13:08.058669 2585 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:13:08.063772 kubelet[2585]: E0130 14:13:08.063701 2585 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-0-5370901337\" not found"
Jan 30 14:13:08.104072 kubelet[2585]: I0130 14:13:08.104039 2585 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.104501 kubelet[2585]: E0130 14:13:08.104473 2585 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.157.113:6443/api/v1/nodes\": dial tcp 138.199.157.113:6443: connect: connection refused" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.113731 kubelet[2585]: I0130 14:13:08.113605 2585 topology_manager.go:215] "Topology Admit Handler" podUID="f7c9a18c8671eac840e01d30ae31777c" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.116580 kubelet[2585]: I0130 14:13:08.116541 2585 topology_manager.go:215] "Topology Admit Handler" podUID="35e478a252a2d21487eafd83b8353441" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.118653 kubelet[2585]: I0130 14:13:08.118613 2585 topology_manager.go:215] "Topology Admit Handler" podUID="4eccfde4488f26db3b465dcb5678ebfc" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202138 kubelet[2585]: I0130 14:13:08.201340 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7c9a18c8671eac840e01d30ae31777c-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-0-5370901337\" (UID: \"f7c9a18c8671eac840e01d30ae31777c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202138 kubelet[2585]: I0130 14:13:08.201453 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7c9a18c8671eac840e01d30ae31777c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-0-5370901337\" (UID: \"f7c9a18c8671eac840e01d30ae31777c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202138 kubelet[2585]: I0130 14:13:08.201498 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7c9a18c8671eac840e01d30ae31777c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-0-5370901337\" (UID: \"f7c9a18c8671eac840e01d30ae31777c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202138 kubelet[2585]: I0130 14:13:08.201545 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202138 kubelet[2585]: I0130 14:13:08.201584 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202420 kubelet[2585]: I0130 14:13:08.201619 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202420 kubelet[2585]: I0130 14:13:08.201697 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202420 kubelet[2585]: I0130 14:13:08.201733 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.202420 kubelet[2585]: I0130 14:13:08.201789 2585 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4eccfde4488f26db3b465dcb5678ebfc-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-0-5370901337\" (UID: \"4eccfde4488f26db3b465dcb5678ebfc\") " pod="kube-system/kube-scheduler-ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.203447 kubelet[2585]: E0130 14:13:08.203242 2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.157.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-5370901337?timeout=10s\": dial tcp 138.199.157.113:6443: connect: connection refused" interval="400ms"
Jan 30 14:13:08.309306 kubelet[2585]: I0130 14:13:08.309269 2585 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.312818 kubelet[2585]: E0130 14:13:08.312780 2585 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.157.113:6443/api/v1/nodes\": dial tcp 138.199.157.113:6443: connect: connection refused" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.423651 containerd[1593]: time="2025-01-30T14:13:08.423564035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-0-5370901337,Uid:35e478a252a2d21487eafd83b8353441,Namespace:kube-system,Attempt:0,}"
Jan 30 14:13:08.424555 containerd[1593]: time="2025-01-30T14:13:08.424495638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-0-5370901337,Uid:f7c9a18c8671eac840e01d30ae31777c,Namespace:kube-system,Attempt:0,}"
Jan 30 14:13:08.428715 containerd[1593]: time="2025-01-30T14:13:08.428338291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-0-5370901337,Uid:4eccfde4488f26db3b465dcb5678ebfc,Namespace:kube-system,Attempt:0,}"
Jan 30 14:13:08.604189 kubelet[2585]: E0130 14:13:08.604110 2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.157.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-5370901337?timeout=10s\": dial tcp 138.199.157.113:6443: connect: connection refused" interval="800ms"
Jan 30 14:13:08.717599 kubelet[2585]: I0130 14:13:08.717563 2585 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.718478 kubelet[2585]: E0130 14:13:08.718360 2585 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.157.113:6443/api/v1/nodes\": dial tcp 138.199.157.113:6443: connect: connection refused" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:08.823767 kubelet[2585]: W0130 14:13:08.823654 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.157.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:08.823767 kubelet[2585]: E0130 14:13:08.823764 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.157.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:08.979130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657380511.mount: Deactivated successfully.
Jan 30 14:13:08.985604 containerd[1593]: time="2025-01-30T14:13:08.985528901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:13:08.988887 containerd[1593]: time="2025-01-30T14:13:08.988838633Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jan 30 14:13:08.990224 containerd[1593]: time="2025-01-30T14:13:08.990148797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:13:08.992200 containerd[1593]: time="2025-01-30T14:13:08.992160204Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:13:08.993452 containerd[1593]: time="2025-01-30T14:13:08.993303328Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:13:08.993452 containerd[1593]: time="2025-01-30T14:13:08.993380328Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:13:08.996366 containerd[1593]: time="2025-01-30T14:13:08.996309939Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:13:08.998436 containerd[1593]: time="2025-01-30T14:13:08.998104425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 574.44003ms"
Jan 30 14:13:09.000030 containerd[1593]: time="2025-01-30T14:13:08.999990431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.409313ms"
Jan 30 14:13:09.000231 containerd[1593]: time="2025-01-30T14:13:09.000205112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:13:09.002041 containerd[1593]: time="2025-01-30T14:13:09.001992678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 573.526186ms"
Jan 30 14:13:09.105471 containerd[1593]: time="2025-01-30T14:13:09.105120344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:13:09.105909 containerd[1593]: time="2025-01-30T14:13:09.105697226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:13:09.105909 containerd[1593]: time="2025-01-30T14:13:09.105723626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:09.106783 containerd[1593]: time="2025-01-30T14:13:09.106581029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:09.110344 containerd[1593]: time="2025-01-30T14:13:09.109846840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:13:09.110344 containerd[1593]: time="2025-01-30T14:13:09.110148801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:13:09.110344 containerd[1593]: time="2025-01-30T14:13:09.110165161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:09.110344 containerd[1593]: time="2025-01-30T14:13:09.110270122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:09.113009 containerd[1593]: time="2025-01-30T14:13:09.112735130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:13:09.113009 containerd[1593]: time="2025-01-30T14:13:09.112792210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:13:09.113009 containerd[1593]: time="2025-01-30T14:13:09.112820810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:09.113009 containerd[1593]: time="2025-01-30T14:13:09.112917011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:09.176443 containerd[1593]: time="2025-01-30T14:13:09.176316943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-0-5370901337,Uid:35e478a252a2d21487eafd83b8353441,Namespace:kube-system,Attempt:0,} returns sandbox id \"8af3a61e9ec338277ccc20f71afd5a4dd8dbfe7541eb22bb9b2a3fcd62945d06\""
Jan 30 14:13:09.189084 containerd[1593]: time="2025-01-30T14:13:09.188899786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-0-5370901337,Uid:f7c9a18c8671eac840e01d30ae31777c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbcbcdccdcf0d83d95da5e2961d47c70552492025f43258fbc34da2c1e3c55ee\""
Jan 30 14:13:09.194912 containerd[1593]: time="2025-01-30T14:13:09.194275644Z" level=info msg="CreateContainer within sandbox \"8af3a61e9ec338277ccc20f71afd5a4dd8dbfe7541eb22bb9b2a3fcd62945d06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 14:13:09.196114 kubelet[2585]: W0130 14:13:09.196059 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.157.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:09.196242 kubelet[2585]: E0130 14:13:09.196231 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.157.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:09.197663 containerd[1593]: time="2025-01-30T14:13:09.197596535Z" level=info msg="CreateContainer within sandbox \"bbcbcdccdcf0d83d95da5e2961d47c70552492025f43258fbc34da2c1e3c55ee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 14:13:09.203840 containerd[1593]: time="2025-01-30T14:13:09.203800596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-0-5370901337,Uid:4eccfde4488f26db3b465dcb5678ebfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7ec9a8927de353bffb61a277851116ea0edd713afd7f7c47560c7f09cf1c7b7\""
Jan 30 14:13:09.207517 containerd[1593]: time="2025-01-30T14:13:09.207478688Z" level=info msg="CreateContainer within sandbox \"c7ec9a8927de353bffb61a277851116ea0edd713afd7f7c47560c7f09cf1c7b7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 14:13:09.224906 containerd[1593]: time="2025-01-30T14:13:09.224840986Z" level=info msg="CreateContainer within sandbox \"bbcbcdccdcf0d83d95da5e2961d47c70552492025f43258fbc34da2c1e3c55ee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"896bd0200c82819cc32e7e06f6db8647b8c7c87777240cc35690c6b205e045d3\""
Jan 30 14:13:09.225632 containerd[1593]: time="2025-01-30T14:13:09.225596349Z" level=info msg="StartContainer for \"896bd0200c82819cc32e7e06f6db8647b8c7c87777240cc35690c6b205e045d3\""
Jan 30 14:13:09.227612 containerd[1593]: time="2025-01-30T14:13:09.227571755Z" level=info msg="CreateContainer within sandbox \"8af3a61e9ec338277ccc20f71afd5a4dd8dbfe7541eb22bb9b2a3fcd62945d06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6e8371d0f39164521f5751706d39459d286c0648dfbe7eafad63848569efaae3\""
Jan 30 14:13:09.229547 containerd[1593]: time="2025-01-30T14:13:09.228281878Z" level=info msg="StartContainer for \"6e8371d0f39164521f5751706d39459d286c0648dfbe7eafad63848569efaae3\""
Jan 30 14:13:09.230952 kubelet[2585]: W0130 14:13:09.230898 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.157.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-5370901337&limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:09.231127 kubelet[2585]: E0130 14:13:09.231107 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.157.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-0-5370901337&limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:09.232618 containerd[1593]: time="2025-01-30T14:13:09.232580252Z" level=info msg="CreateContainer within sandbox \"c7ec9a8927de353bffb61a277851116ea0edd713afd7f7c47560c7f09cf1c7b7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"62fe3c5efe8fdffbf339cc4a243abc5d5671808ddcc42d8b17bdd60c26134926\""
Jan 30 14:13:09.234524 containerd[1593]: time="2025-01-30T14:13:09.234494579Z" level=info msg="StartContainer for \"62fe3c5efe8fdffbf339cc4a243abc5d5671808ddcc42d8b17bdd60c26134926\""
Jan 30 14:13:09.314099 containerd[1593]: time="2025-01-30T14:13:09.313945925Z" level=info msg="StartContainer for \"896bd0200c82819cc32e7e06f6db8647b8c7c87777240cc35690c6b205e045d3\" returns successfully"
Jan 30 14:13:09.336864 containerd[1593]: time="2025-01-30T14:13:09.336524521Z" level=info msg="StartContainer for \"6e8371d0f39164521f5751706d39459d286c0648dfbe7eafad63848569efaae3\" returns successfully"
Jan 30 14:13:09.360815 containerd[1593]: time="2025-01-30T14:13:09.360758003Z" level=info msg="StartContainer for \"62fe3c5efe8fdffbf339cc4a243abc5d5671808ddcc42d8b17bdd60c26134926\" returns successfully"
Jan 30 14:13:09.394132 kubelet[2585]: W0130 14:13:09.394039 2585 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.157.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:09.394132 kubelet[2585]: E0130 14:13:09.394108 2585 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.157.113:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.157.113:6443: connect: connection refused
Jan 30 14:13:09.405481 kubelet[2585]: E0130 14:13:09.404748 2585 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.157.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-0-5370901337?timeout=10s\": dial tcp 138.199.157.113:6443: connect: connection refused" interval="1.6s"
Jan 30 14:13:09.524792 kubelet[2585]: I0130 14:13:09.522581 2585 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:12.119098 kubelet[2585]: E0130 14:13:12.119057 2585 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-0-5370901337\" not found" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:12.245934 kubelet[2585]: I0130 14:13:12.245887 2585 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:12.989603 kubelet[2585]: I0130 14:13:12.989548 2585 apiserver.go:52] "Watching apiserver"
Jan 30 14:13:13.000347 kubelet[2585]: I0130 14:13:13.000304 2585 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:13:14.343515 systemd[1]: Reloading requested from client PID 2857 ('systemctl') (unit session-7.scope)...
Jan 30 14:13:14.343530 systemd[1]: Reloading...
Jan 30 14:13:14.482423 zram_generator::config[2897]: No configuration found.
Jan 30 14:13:14.603172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:13:14.678730 systemd[1]: Reloading finished in 334 ms.
Jan 30 14:13:14.715314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:13:14.730268 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 14:13:14.731850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:13:14.741108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:13:14.849162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:13:14.866189 (kubelet)[2952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 14:13:14.935988 kubelet[2952]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:13:14.937143 kubelet[2952]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 14:13:14.937143 kubelet[2952]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:13:14.937143 kubelet[2952]: I0130 14:13:14.936544 2952 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 14:13:14.941962 kubelet[2952]: I0130 14:13:14.941916 2952 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 14:13:14.942130 kubelet[2952]: I0130 14:13:14.942118 2952 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 14:13:14.942453 kubelet[2952]: I0130 14:13:14.942386 2952 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 14:13:14.944104 kubelet[2952]: I0130 14:13:14.944073 2952 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 14:13:14.952423 kubelet[2952]: I0130 14:13:14.952340 2952 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:13:14.959126 kubelet[2952]: I0130 14:13:14.958260 2952 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 14:13:14.959126 kubelet[2952]: I0130 14:13:14.958715 2952 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 14:13:14.959126 kubelet[2952]: I0130 14:13:14.958745 2952 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-0-5370901337","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 14:13:14.959126 kubelet[2952]: I0130 14:13:14.958942 2952 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 14:13:14.959379 kubelet[2952]: I0130 14:13:14.958952 2952 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 14:13:14.959379 kubelet[2952]: I0130 14:13:14.958986 2952 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:13:14.959491 kubelet[2952]: I0130 14:13:14.959475 2952 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 14:13:14.959562 kubelet[2952]: I0130 14:13:14.959553 2952 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 14:13:14.959722 kubelet[2952]: I0130 14:13:14.959707 2952 kubelet.go:312] "Adding apiserver pod source"
Jan 30 14:13:14.959805 kubelet[2952]: I0130 14:13:14.959795 2952 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 14:13:14.965685 kubelet[2952]: I0130 14:13:14.965654 2952 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 14:13:14.966050 kubelet[2952]: I0130 14:13:14.966035 2952 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 14:13:14.966570 kubelet[2952]: I0130 14:13:14.966552 2952 server.go:1264] "Started kubelet"
Jan 30 14:13:14.968779 kubelet[2952]: I0130 14:13:14.968758 2952 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:13:14.975143 kubelet[2952]: I0130 14:13:14.975093 2952 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:13:14.976477 kubelet[2952]: I0130 14:13:14.976386 2952 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 14:13:14.977693 kubelet[2952]: I0130 14:13:14.977584 2952 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:13:14.978472 kubelet[2952]: I0130 14:13:14.977971 2952 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:13:14.981428 kubelet[2952]: I0130 14:13:14.980968 2952 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 14:13:14.983099 kubelet[2952]: I0130 14:13:14.983078 2952 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 14:13:14.983330 kubelet[2952]: I0130 14:13:14.983317 2952 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:13:14.985134 kubelet[2952]: I0130 14:13:14.985100 2952 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:13:14.986167 kubelet[2952]: I0130 14:13:14.986148 2952 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 14:13:14.986264 kubelet[2952]: I0130 14:13:14.986255 2952 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 14:13:14.986350 kubelet[2952]: I0130 14:13:14.986341 2952 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 14:13:14.986466 kubelet[2952]: E0130 14:13:14.986449 2952 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 14:13:15.007185 kubelet[2952]: I0130 14:13:15.007136 2952 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:13:15.007548 kubelet[2952]: I0130 14:13:15.007281 2952 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:13:15.010937 kubelet[2952]: E0130 14:13:15.010905 2952 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:13:15.012436 kubelet[2952]: I0130 14:13:15.012344 2952 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:13:15.073879 kubelet[2952]: I0130 14:13:15.073766 2952 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 14:13:15.073879 kubelet[2952]: I0130 14:13:15.073786 2952 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 14:13:15.073879 kubelet[2952]: I0130 14:13:15.073808 2952 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:13:15.074322 kubelet[2952]: I0130 14:13:15.073961 2952 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 14:13:15.074322 kubelet[2952]: I0130 14:13:15.073980 2952 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 14:13:15.074322 kubelet[2952]: I0130 14:13:15.073999 2952 policy_none.go:49] "None policy: Start"
Jan 30 14:13:15.075287 kubelet[2952]: I0130 14:13:15.075044 2952 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 14:13:15.075287 kubelet[2952]: I0130 14:13:15.075069 2952 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 14:13:15.075287 kubelet[2952]: I0130 14:13:15.075212 2952 state_mem.go:75] "Updated machine memory state"
Jan 30 14:13:15.076585 kubelet[2952]: I0130 14:13:15.076560 2952 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:13:15.076902 kubelet[2952]: I0130 14:13:15.076863 2952 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:13:15.077043 kubelet[2952]: I0130 14:13:15.077031 2952 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:13:15.084427 kubelet[2952]: I0130 14:13:15.084335 2952 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.087567 kubelet[2952]: I0130 14:13:15.086562 2952 topology_manager.go:215] "Topology Admit Handler" podUID="f7c9a18c8671eac840e01d30ae31777c" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.087567 kubelet[2952]: I0130 14:13:15.086704 2952 topology_manager.go:215] "Topology Admit Handler" podUID="35e478a252a2d21487eafd83b8353441" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.087567 kubelet[2952]: I0130 14:13:15.086747 2952 topology_manager.go:215] "Topology Admit Handler" podUID="4eccfde4488f26db3b465dcb5678ebfc" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.104278 kubelet[2952]: I0130 14:13:15.104241 2952 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.104645 kubelet[2952]: I0130 14:13:15.104579 2952 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.184551 kubelet[2952]: I0130 14:13:15.184306 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f7c9a18c8671eac840e01d30ae31777c-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-0-5370901337\" (UID: \"f7c9a18c8671eac840e01d30ae31777c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.184551 kubelet[2952]: I0130 14:13:15.184365 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f7c9a18c8671eac840e01d30ae31777c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-0-5370901337\" (UID: \"f7c9a18c8671eac840e01d30ae31777c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.185747 kubelet[2952]: I0130 14:13:15.185031 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.185747 kubelet[2952]: I0130 14:13:15.185200 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.185747 kubelet[2952]: I0130 14:13:15.185269 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4eccfde4488f26db3b465dcb5678ebfc-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-0-5370901337\" (UID: \"4eccfde4488f26db3b465dcb5678ebfc\") " pod="kube-system/kube-scheduler-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.185747 kubelet[2952]: I0130 14:13:15.185304 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f7c9a18c8671eac840e01d30ae31777c-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-0-5370901337\" (UID: \"f7c9a18c8671eac840e01d30ae31777c\") " pod="kube-system/kube-apiserver-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.185747 kubelet[2952]: I0130 14:13:15.185455 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.186220 kubelet[2952]: I0130 14:13:15.185519 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.186220 kubelet[2952]: I0130 14:13:15.185556 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/35e478a252a2d21487eafd83b8353441-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-0-5370901337\" (UID: \"35e478a252a2d21487eafd83b8353441\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337"
Jan 30 14:13:15.334574 sudo[2984]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 30 14:13:15.334965 sudo[2984]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 30 14:13:15.766367 sudo[2984]: pam_unix(sudo:session): session closed for user root
Jan 30 14:13:15.963441 kubelet[2952]: I0130 14:13:15.962998 2952 apiserver.go:52] "Watching apiserver"
Jan 30 14:13:15.984383 kubelet[2952]: I0130 14:13:15.983950 2952 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:13:16.003442 kubelet[2952]: I0130 14:13:16.002170 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-0-5370901337" podStartSLOduration=1.002143925 podStartE2EDuration="1.002143925s" podCreationTimestamp="2025-01-30 14:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:15.99674603 +0000 UTC m=+1.126452151" watchObservedRunningTime="2025-01-30 14:13:16.002143925 +0000 UTC m=+1.131850086"
Jan 30 14:13:16.027940 kubelet[2952]: I0130 14:13:16.027490 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-0-5370901337" podStartSLOduration=1.027462753 podStartE2EDuration="1.027462753s" podCreationTimestamp="2025-01-30 14:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:16.01530584 +0000 UTC m=+1.145011921" watchObservedRunningTime="2025-01-30 14:13:16.027462753 +0000 UTC m=+1.157168914"
Jan 30 14:13:16.042602 kubelet[2952]: I0130 14:13:16.042150 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-0-5370901337" podStartSLOduration=1.042132553 podStartE2EDuration="1.042132553s" podCreationTimestamp="2025-01-30 14:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:16.028764476 +0000 UTC m=+1.158470597" watchObservedRunningTime="2025-01-30 14:13:16.042132553 +0000 UTC m=+1.171838674"
Jan 30 14:13:17.897907 sudo[2052]: pam_unix(sudo:session): session closed for user root
Jan 30 14:13:18.058313 sshd[2048]: pam_unix(sshd:session): session closed for user core
Jan 30 14:13:18.063940 systemd[1]: sshd@6-138.199.157.113:22-139.178.68.195:51482.service: Deactivated successfully.
Jan 30 14:13:18.069262 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 14:13:18.072390 systemd-logind[1556]: Session 7 logged out. Waiting for processes to exit.
Jan 30 14:13:18.074304 systemd-logind[1556]: Removed session 7.
Jan 30 14:13:30.783012 kubelet[2952]: I0130 14:13:30.782981 2952 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 14:13:30.785823 containerd[1593]: time="2025-01-30T14:13:30.784109690Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 14:13:30.786139 kubelet[2952]: I0130 14:13:30.784302 2952 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 14:13:30.847390 kubelet[2952]: I0130 14:13:30.847336 2952 topology_manager.go:215] "Topology Admit Handler" podUID="f3184743-144f-4ee2-a12a-411c429df7ec" podNamespace="kube-system" podName="cilium-2n4l9"
Jan 30 14:13:30.847687 kubelet[2952]: I0130 14:13:30.847665 2952 topology_manager.go:215] "Topology Admit Handler" podUID="5033eabf-1e7c-4693-a8ac-0ef6e588daa6" podNamespace="kube-system" podName="kube-proxy-qr7qq"
Jan 30 14:13:30.953654 kubelet[2952]: I0130 14:13:30.953607 2952 topology_manager.go:215] "Topology Admit Handler" podUID="9f4010f9-c023-47d3-8a89-49c973adb9ba" podNamespace="kube-system" podName="cilium-operator-599987898-l4jkn"
Jan 30 14:13:30.983742 kubelet[2952]: I0130 14:13:30.982172 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-lib-modules\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.983742 kubelet[2952]: I0130 14:13:30.982213 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-config-path\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.983742 kubelet[2952]: I0130 14:13:30.982235 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3184743-144f-4ee2-a12a-411c429df7ec-hubble-tls\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.983742 kubelet[2952]: I0130 14:13:30.982253 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-xtables-lock\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.983742 kubelet[2952]: I0130 14:13:30.982273 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5033eabf-1e7c-4693-a8ac-0ef6e588daa6-lib-modules\") pod \"kube-proxy-qr7qq\" (UID: \"5033eabf-1e7c-4693-a8ac-0ef6e588daa6\") " pod="kube-system/kube-proxy-qr7qq"
Jan 30 14:13:30.983742 kubelet[2952]: I0130 14:13:30.982291 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3184743-144f-4ee2-a12a-411c429df7ec-clustermesh-secrets\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984337 kubelet[2952]: I0130 14:13:30.982307 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vplw8\" (UniqueName: \"kubernetes.io/projected/f3184743-144f-4ee2-a12a-411c429df7ec-kube-api-access-vplw8\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984337 kubelet[2952]: I0130 14:13:30.982324 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5033eabf-1e7c-4693-a8ac-0ef6e588daa6-kube-proxy\") pod \"kube-proxy-qr7qq\" (UID: \"5033eabf-1e7c-4693-a8ac-0ef6e588daa6\") " pod="kube-system/kube-proxy-qr7qq"
Jan 30 14:13:30.984337 kubelet[2952]: I0130 14:13:30.982341 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-bpf-maps\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984337 kubelet[2952]: I0130 14:13:30.982369 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-host-proc-sys-net\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984337 kubelet[2952]: I0130 14:13:30.982387 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5033eabf-1e7c-4693-a8ac-0ef6e588daa6-xtables-lock\") pod \"kube-proxy-qr7qq\" (UID: \"5033eabf-1e7c-4693-a8ac-0ef6e588daa6\") " pod="kube-system/kube-proxy-qr7qq"
Jan 30 14:13:30.984506 kubelet[2952]: I0130 14:13:30.982418 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptfbh\" (UniqueName: \"kubernetes.io/projected/5033eabf-1e7c-4693-a8ac-0ef6e588daa6-kube-api-access-ptfbh\") pod \"kube-proxy-qr7qq\" (UID: \"5033eabf-1e7c-4693-a8ac-0ef6e588daa6\") " pod="kube-system/kube-proxy-qr7qq"
Jan 30 14:13:30.984506 kubelet[2952]: I0130 14:13:30.982436 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-host-proc-sys-kernel\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984506 kubelet[2952]: I0130 14:13:30.982455 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-run\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984506 kubelet[2952]: I0130 14:13:30.982471 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cni-path\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984506 kubelet[2952]: I0130 14:13:30.982489 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-etc-cni-netd\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984506 kubelet[2952]: I0130 14:13:30.982509 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-hostproc\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:30.984639 kubelet[2952]: I0130 14:13:30.982542 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-cgroup\") pod \"cilium-2n4l9\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " pod="kube-system/cilium-2n4l9"
Jan 30 14:13:31.083156 kubelet[2952]: I0130 14:13:31.083000 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfdwk\" (UniqueName: \"kubernetes.io/projected/9f4010f9-c023-47d3-8a89-49c973adb9ba-kube-api-access-wfdwk\") pod \"cilium-operator-599987898-l4jkn\" (UID: \"9f4010f9-c023-47d3-8a89-49c973adb9ba\") " pod="kube-system/cilium-operator-599987898-l4jkn"
Jan 30 14:13:31.086737 kubelet[2952]: I0130 14:13:31.085430 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f4010f9-c023-47d3-8a89-49c973adb9ba-cilium-config-path\") pod \"cilium-operator-599987898-l4jkn\" (UID: \"9f4010f9-c023-47d3-8a89-49c973adb9ba\") " pod="kube-system/cilium-operator-599987898-l4jkn"
Jan 30 14:13:31.155357 containerd[1593]: time="2025-01-30T14:13:31.155298733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qr7qq,Uid:5033eabf-1e7c-4693-a8ac-0ef6e588daa6,Namespace:kube-system,Attempt:0,}"
Jan 30 14:13:31.156861 containerd[1593]: time="2025-01-30T14:13:31.156637575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2n4l9,Uid:f3184743-144f-4ee2-a12a-411c429df7ec,Namespace:kube-system,Attempt:0,}"
Jan 30 14:13:31.204696 containerd[1593]: time="2025-01-30T14:13:31.204602176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:13:31.204830 containerd[1593]: time="2025-01-30T14:13:31.204667976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:13:31.204830 containerd[1593]: time="2025-01-30T14:13:31.204683616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:31.204830 containerd[1593]: time="2025-01-30T14:13:31.204770777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:31.208029 containerd[1593]: time="2025-01-30T14:13:31.207917542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:13:31.208287 containerd[1593]: time="2025-01-30T14:13:31.208120622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:13:31.208287 containerd[1593]: time="2025-01-30T14:13:31.208139822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:31.208600 containerd[1593]: time="2025-01-30T14:13:31.208509663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:31.257686 containerd[1593]: time="2025-01-30T14:13:31.257646626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2n4l9,Uid:f3184743-144f-4ee2-a12a-411c429df7ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\""
Jan 30 14:13:31.261792 containerd[1593]: time="2025-01-30T14:13:31.261548473Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 30 14:13:31.266236 containerd[1593]: time="2025-01-30T14:13:31.265987561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l4jkn,Uid:9f4010f9-c023-47d3-8a89-49c973adb9ba,Namespace:kube-system,Attempt:0,}"
Jan 30 14:13:31.275375 containerd[1593]: time="2025-01-30T14:13:31.275184736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qr7qq,Uid:5033eabf-1e7c-4693-a8ac-0ef6e588daa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5de2b3efca77705f8ee920b25557d8e63c8bb56923a3154c91b478d04ab9171\""
Jan 30 14:13:31.280848 containerd[1593]: time="2025-01-30T14:13:31.280797306Z" level=info msg="CreateContainer within sandbox \"a5de2b3efca77705f8ee920b25557d8e63c8bb56923a3154c91b478d04ab9171\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 14:13:31.294575 containerd[1593]: time="2025-01-30T14:13:31.293816248Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:13:31.295146 containerd[1593]: time="2025-01-30T14:13:31.294811610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:13:31.295146 containerd[1593]: time="2025-01-30T14:13:31.294915450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:31.295420 containerd[1593]: time="2025-01-30T14:13:31.295299410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:13:31.298730 containerd[1593]: time="2025-01-30T14:13:31.298674656Z" level=info msg="CreateContainer within sandbox \"a5de2b3efca77705f8ee920b25557d8e63c8bb56923a3154c91b478d04ab9171\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f99012d30648f168e2087705f6eac553d3c43f40214f158dcba9667874e3116e\""
Jan 30 14:13:31.299531 containerd[1593]: time="2025-01-30T14:13:31.299502938Z" level=info msg="StartContainer for \"f99012d30648f168e2087705f6eac553d3c43f40214f158dcba9667874e3116e\""
Jan 30 14:13:31.356074 containerd[1593]: time="2025-01-30T14:13:31.356040314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l4jkn,Uid:9f4010f9-c023-47d3-8a89-49c973adb9ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\""
Jan 30 14:13:31.378373 containerd[1593]: time="2025-01-30T14:13:31.378162711Z" level=info msg="StartContainer for \"f99012d30648f168e2087705f6eac553d3c43f40214f158dcba9667874e3116e\" returns successfully"
Jan 30 14:13:35.203689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079559408.mount: Deactivated successfully.
Jan 30 14:13:36.557467 containerd[1593]: time="2025-01-30T14:13:36.557387484Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:36.558659 containerd[1593]: time="2025-01-30T14:13:36.558615606Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 14:13:36.561144 containerd[1593]: time="2025-01-30T14:13:36.559344807Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:36.561144 containerd[1593]: time="2025-01-30T14:13:36.561009609Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.299419856s" Jan 30 14:13:36.561144 containerd[1593]: time="2025-01-30T14:13:36.561046209Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 14:13:36.564151 containerd[1593]: time="2025-01-30T14:13:36.564119414Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 14:13:36.566295 containerd[1593]: time="2025-01-30T14:13:36.566262257Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:13:36.580190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1433753765.mount: Deactivated successfully. Jan 30 14:13:36.582844 containerd[1593]: time="2025-01-30T14:13:36.582801681Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\"" Jan 30 14:13:36.588031 containerd[1593]: time="2025-01-30T14:13:36.587994209Z" level=info msg="StartContainer for \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\"" Jan 30 14:13:36.650129 containerd[1593]: time="2025-01-30T14:13:36.650081379Z" level=info msg="StartContainer for \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\" returns successfully" Jan 30 14:13:36.813800 containerd[1593]: time="2025-01-30T14:13:36.813600058Z" level=info msg="shim disconnected" id=ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce namespace=k8s.io Jan 30 14:13:36.813800 containerd[1593]: time="2025-01-30T14:13:36.813690658Z" level=warning msg="cleaning up after shim disconnected" id=ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce namespace=k8s.io Jan 30 14:13:36.813800 containerd[1593]: time="2025-01-30T14:13:36.813702098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:13:36.828337 containerd[1593]: time="2025-01-30T14:13:36.828284159Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:13:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:13:37.116644 containerd[1593]: time="2025-01-30T14:13:37.115545933Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:13:37.141964 kubelet[2952]: I0130 14:13:37.141892 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qr7qq" podStartSLOduration=7.14186249 podStartE2EDuration="7.14186249s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:32.09810757 +0000 UTC m=+17.227813691" watchObservedRunningTime="2025-01-30 14:13:37.14186249 +0000 UTC m=+22.271568611" Jan 30 14:13:37.148097 containerd[1593]: time="2025-01-30T14:13:37.147912739Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\"" Jan 30 14:13:37.149518 containerd[1593]: time="2025-01-30T14:13:37.148864020Z" level=info msg="StartContainer for \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\"" Jan 30 14:13:37.216700 containerd[1593]: time="2025-01-30T14:13:37.216587516Z" level=info msg="StartContainer for \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\" returns successfully" Jan 30 14:13:37.233407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:13:37.234731 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:13:37.234815 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:13:37.241964 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 30 14:13:37.270645 containerd[1593]: time="2025-01-30T14:13:37.270563353Z" level=info msg="shim disconnected" id=78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7 namespace=k8s.io Jan 30 14:13:37.270645 containerd[1593]: time="2025-01-30T14:13:37.270640433Z" level=warning msg="cleaning up after shim disconnected" id=78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7 namespace=k8s.io Jan 30 14:13:37.270645 containerd[1593]: time="2025-01-30T14:13:37.270650633Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:13:37.274654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:13:37.577278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce-rootfs.mount: Deactivated successfully. Jan 30 14:13:38.110921 containerd[1593]: time="2025-01-30T14:13:38.110881097Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:13:38.146621 containerd[1593]: time="2025-01-30T14:13:38.146506746Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\"" Jan 30 14:13:38.149460 containerd[1593]: time="2025-01-30T14:13:38.147426107Z" level=info msg="StartContainer for \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\"" Jan 30 14:13:38.225528 containerd[1593]: time="2025-01-30T14:13:38.225037254Z" level=info msg="StartContainer for \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\" returns successfully" Jan 30 14:13:38.260803 containerd[1593]: time="2025-01-30T14:13:38.260448823Z" level=info msg="shim disconnected" 
id=ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417 namespace=k8s.io Jan 30 14:13:38.260803 containerd[1593]: time="2025-01-30T14:13:38.260516383Z" level=warning msg="cleaning up after shim disconnected" id=ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417 namespace=k8s.io Jan 30 14:13:38.260803 containerd[1593]: time="2025-01-30T14:13:38.260527583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:13:38.576494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417-rootfs.mount: Deactivated successfully. Jan 30 14:13:38.793864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905805782.mount: Deactivated successfully. Jan 30 14:13:39.124136 containerd[1593]: time="2025-01-30T14:13:39.124082283Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 14:13:39.146244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount789873420.mount: Deactivated successfully. 
Jan 30 14:13:39.150104 containerd[1593]: time="2025-01-30T14:13:39.149951118Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\"" Jan 30 14:13:39.150931 containerd[1593]: time="2025-01-30T14:13:39.150845359Z" level=info msg="StartContainer for \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\"" Jan 30 14:13:39.223629 containerd[1593]: time="2025-01-30T14:13:39.223538096Z" level=info msg="StartContainer for \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\" returns successfully" Jan 30 14:13:39.287081 containerd[1593]: time="2025-01-30T14:13:39.286883100Z" level=info msg="shim disconnected" id=513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f namespace=k8s.io Jan 30 14:13:39.287081 containerd[1593]: time="2025-01-30T14:13:39.286935540Z" level=warning msg="cleaning up after shim disconnected" id=513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f namespace=k8s.io Jan 30 14:13:39.287081 containerd[1593]: time="2025-01-30T14:13:39.286944380Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:13:39.337541 containerd[1593]: time="2025-01-30T14:13:39.336688366Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:39.337541 containerd[1593]: time="2025-01-30T14:13:39.337490687Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 14:13:39.338054 containerd[1593]: time="2025-01-30T14:13:39.337988008Z" level=info msg="ImageCreate event 
name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:13:39.340735 containerd[1593]: time="2025-01-30T14:13:39.340443531Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.776095477s" Jan 30 14:13:39.340735 containerd[1593]: time="2025-01-30T14:13:39.340515691Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 14:13:39.344204 containerd[1593]: time="2025-01-30T14:13:39.343909376Z" level=info msg="CreateContainer within sandbox \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 14:13:39.354499 containerd[1593]: time="2025-01-30T14:13:39.354450190Z" level=info msg="CreateContainer within sandbox \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\"" Jan 30 14:13:39.356529 containerd[1593]: time="2025-01-30T14:13:39.356126792Z" level=info msg="StartContainer for \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\"" Jan 30 14:13:39.417964 containerd[1593]: time="2025-01-30T14:13:39.417476074Z" level=info msg="StartContainer for \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\" returns successfully" Jan 30 14:13:40.143374 containerd[1593]: 
time="2025-01-30T14:13:40.142796154Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 14:13:40.168246 containerd[1593]: time="2025-01-30T14:13:40.168198387Z" level=info msg="CreateContainer within sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\"" Jan 30 14:13:40.173291 containerd[1593]: time="2025-01-30T14:13:40.173241074Z" level=info msg="StartContainer for \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\"" Jan 30 14:13:40.189003 kubelet[2952]: I0130 14:13:40.188915 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-l4jkn" podStartSLOduration=2.2061922 podStartE2EDuration="10.188896414s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="2025-01-30 14:13:31.359140079 +0000 UTC m=+16.488846200" lastFinishedPulling="2025-01-30 14:13:39.341844293 +0000 UTC m=+24.471550414" observedRunningTime="2025-01-30 14:13:40.15480005 +0000 UTC m=+25.284506211" watchObservedRunningTime="2025-01-30 14:13:40.188896414 +0000 UTC m=+25.318602535" Jan 30 14:13:40.280612 containerd[1593]: time="2025-01-30T14:13:40.280546612Z" level=info msg="StartContainer for \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\" returns successfully" Jan 30 14:13:40.446175 kubelet[2952]: I0130 14:13:40.445884 2952 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:13:40.482361 kubelet[2952]: I0130 14:13:40.481838 2952 topology_manager.go:215] "Topology Admit Handler" podUID="846c3ee3-a8bf-4490-b814-43b9ae2d1a15" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wf9m5" Jan 30 14:13:40.484608 kubelet[2952]: I0130 14:13:40.483583 2952 
topology_manager.go:215] "Topology Admit Handler" podUID="9eced152-0573-4e2f-925e-28ec0717c307" podNamespace="kube-system" podName="coredns-7db6d8ff4d-m5fx4" Jan 30 14:13:40.658308 kubelet[2952]: I0130 14:13:40.658238 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmrxd\" (UniqueName: \"kubernetes.io/projected/9eced152-0573-4e2f-925e-28ec0717c307-kube-api-access-qmrxd\") pod \"coredns-7db6d8ff4d-m5fx4\" (UID: \"9eced152-0573-4e2f-925e-28ec0717c307\") " pod="kube-system/coredns-7db6d8ff4d-m5fx4" Jan 30 14:13:40.658308 kubelet[2952]: I0130 14:13:40.658291 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/846c3ee3-a8bf-4490-b814-43b9ae2d1a15-config-volume\") pod \"coredns-7db6d8ff4d-wf9m5\" (UID: \"846c3ee3-a8bf-4490-b814-43b9ae2d1a15\") " pod="kube-system/coredns-7db6d8ff4d-wf9m5" Jan 30 14:13:40.658834 kubelet[2952]: I0130 14:13:40.658663 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eced152-0573-4e2f-925e-28ec0717c307-config-volume\") pod \"coredns-7db6d8ff4d-m5fx4\" (UID: \"9eced152-0573-4e2f-925e-28ec0717c307\") " pod="kube-system/coredns-7db6d8ff4d-m5fx4" Jan 30 14:13:40.658834 kubelet[2952]: I0130 14:13:40.658800 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75lxb\" (UniqueName: \"kubernetes.io/projected/846c3ee3-a8bf-4490-b814-43b9ae2d1a15-kube-api-access-75lxb\") pod \"coredns-7db6d8ff4d-wf9m5\" (UID: \"846c3ee3-a8bf-4490-b814-43b9ae2d1a15\") " pod="kube-system/coredns-7db6d8ff4d-wf9m5" Jan 30 14:13:40.799006 containerd[1593]: time="2025-01-30T14:13:40.798699882Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wf9m5,Uid:846c3ee3-a8bf-4490-b814-43b9ae2d1a15,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:41.097056 containerd[1593]: time="2025-01-30T14:13:41.096388063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m5fx4,Uid:9eced152-0573-4e2f-925e-28ec0717c307,Namespace:kube-system,Attempt:0,}" Jan 30 14:13:41.167978 kubelet[2952]: I0130 14:13:41.167896 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2n4l9" podStartSLOduration=5.865187691 podStartE2EDuration="11.167875352s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="2025-01-30 14:13:31.25985839 +0000 UTC m=+16.389564511" lastFinishedPulling="2025-01-30 14:13:36.562546051 +0000 UTC m=+21.692252172" observedRunningTime="2025-01-30 14:13:41.167240792 +0000 UTC m=+26.296946913" watchObservedRunningTime="2025-01-30 14:13:41.167875352 +0000 UTC m=+26.297581473" Jan 30 14:13:43.364485 systemd-networkd[1241]: cilium_host: Link UP Jan 30 14:13:43.364715 systemd-networkd[1241]: cilium_net: Link UP Jan 30 14:13:43.364851 systemd-networkd[1241]: cilium_net: Gained carrier Jan 30 14:13:43.364964 systemd-networkd[1241]: cilium_host: Gained carrier Jan 30 14:13:43.500178 systemd-networkd[1241]: cilium_vxlan: Link UP Jan 30 14:13:43.500188 systemd-networkd[1241]: cilium_vxlan: Gained carrier Jan 30 14:13:43.801438 kernel: NET: Registered PF_ALG protocol family Jan 30 14:13:44.248874 systemd-networkd[1241]: cilium_host: Gained IPv6LL Jan 30 14:13:44.312629 systemd-networkd[1241]: cilium_net: Gained IPv6LL Jan 30 14:13:44.540274 systemd-networkd[1241]: lxc_health: Link UP Jan 30 14:13:44.548381 systemd-networkd[1241]: lxc_health: Gained carrier Jan 30 14:13:44.696643 systemd-networkd[1241]: cilium_vxlan: Gained IPv6LL Jan 30 14:13:44.865282 systemd-networkd[1241]: lxcfc077e0ceab1: Link UP Jan 30 14:13:44.873585 kernel: eth0: renamed from tmpf50c5 Jan 30 14:13:44.883670 
systemd-networkd[1241]: lxcfc077e0ceab1: Gained carrier Jan 30 14:13:45.145585 systemd-networkd[1241]: lxcf4e83511c120: Link UP Jan 30 14:13:45.152437 kernel: eth0: renamed from tmpa3682 Jan 30 14:13:45.161859 systemd-networkd[1241]: lxcf4e83511c120: Gained carrier Jan 30 14:13:45.593064 systemd-networkd[1241]: lxc_health: Gained IPv6LL Jan 30 14:13:46.106593 systemd-networkd[1241]: lxcfc077e0ceab1: Gained IPv6LL Jan 30 14:13:46.424598 systemd-networkd[1241]: lxcf4e83511c120: Gained IPv6LL Jan 30 14:13:49.053262 containerd[1593]: time="2025-01-30T14:13:49.050137318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:49.053262 containerd[1593]: time="2025-01-30T14:13:49.050576919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:49.053262 containerd[1593]: time="2025-01-30T14:13:49.050590599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:49.053262 containerd[1593]: time="2025-01-30T14:13:49.050834279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:49.094627 containerd[1593]: time="2025-01-30T14:13:49.093139121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:13:49.094627 containerd[1593]: time="2025-01-30T14:13:49.093200921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:13:49.094627 containerd[1593]: time="2025-01-30T14:13:49.093228441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:49.094627 containerd[1593]: time="2025-01-30T14:13:49.093318001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:13:49.181665 containerd[1593]: time="2025-01-30T14:13:49.181493888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-m5fx4,Uid:9eced152-0573-4e2f-925e-28ec0717c307,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3682f0c7f4f1f3375f1c779e88737dac8b53f1caa4b52df6c7b4c2e42bf47b7\"" Jan 30 14:13:49.188583 containerd[1593]: time="2025-01-30T14:13:49.188486415Z" level=info msg="CreateContainer within sandbox \"a3682f0c7f4f1f3375f1c779e88737dac8b53f1caa4b52df6c7b4c2e42bf47b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:13:49.196027 containerd[1593]: time="2025-01-30T14:13:49.195557982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wf9m5,Uid:846c3ee3-a8bf-4490-b814-43b9ae2d1a15,Namespace:kube-system,Attempt:0,} returns sandbox id \"f50c5af97632e87a094de50ca5dbacc1693d452450b11752df8cc31a45437e8a\"" Jan 30 14:13:49.201575 containerd[1593]: time="2025-01-30T14:13:49.201359667Z" level=info msg="CreateContainer within sandbox \"f50c5af97632e87a094de50ca5dbacc1693d452450b11752df8cc31a45437e8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:13:49.221819 containerd[1593]: time="2025-01-30T14:13:49.221617927Z" level=info msg="CreateContainer within sandbox \"a3682f0c7f4f1f3375f1c779e88737dac8b53f1caa4b52df6c7b4c2e42bf47b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05bfcffc5b25065d180d7523257c33e7c809202a63b20b1cfb9e56b5217c415e\"" Jan 30 14:13:49.223608 containerd[1593]: time="2025-01-30T14:13:49.222619128Z" level=info msg="StartContainer for \"05bfcffc5b25065d180d7523257c33e7c809202a63b20b1cfb9e56b5217c415e\"" Jan 30 14:13:49.223608 containerd[1593]: 
time="2025-01-30T14:13:49.222632208Z" level=info msg="CreateContainer within sandbox \"f50c5af97632e87a094de50ca5dbacc1693d452450b11752df8cc31a45437e8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5abe37e7dbffc9063c9d5f9eb600c058607e6e55073e3185b790ccce4564f54\"" Jan 30 14:13:49.223608 containerd[1593]: time="2025-01-30T14:13:49.223502289Z" level=info msg="StartContainer for \"b5abe37e7dbffc9063c9d5f9eb600c058607e6e55073e3185b790ccce4564f54\"" Jan 30 14:13:49.294666 containerd[1593]: time="2025-01-30T14:13:49.294607239Z" level=info msg="StartContainer for \"05bfcffc5b25065d180d7523257c33e7c809202a63b20b1cfb9e56b5217c415e\" returns successfully" Jan 30 14:13:49.307868 containerd[1593]: time="2025-01-30T14:13:49.307388492Z" level=info msg="StartContainer for \"b5abe37e7dbffc9063c9d5f9eb600c058607e6e55073e3185b790ccce4564f54\" returns successfully" Jan 30 14:13:50.213590 kubelet[2952]: I0130 14:13:50.212572 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wf9m5" podStartSLOduration=20.212551018 podStartE2EDuration="20.212551018s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:50.212519538 +0000 UTC m=+35.342225659" watchObservedRunningTime="2025-01-30 14:13:50.212551018 +0000 UTC m=+35.342257139" Jan 30 14:13:50.213590 kubelet[2952]: I0130 14:13:50.212675 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-m5fx4" podStartSLOduration=20.212671458 podStartE2EDuration="20.212671458s" podCreationTimestamp="2025-01-30 14:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:13:50.19347136 +0000 UTC m=+35.323177561" watchObservedRunningTime="2025-01-30 14:13:50.212671458 +0000 UTC 
m=+35.342377619" Jan 30 14:13:51.707133 kubelet[2952]: I0130 14:13:51.706904 2952 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:18:05.143520 systemd[1]: Started sshd@7-138.199.157.113:22-139.178.68.195:42164.service - OpenSSH per-connection server daemon (139.178.68.195:42164). Jan 30 14:18:06.149287 sshd[4344]: Accepted publickey for core from 139.178.68.195 port 42164 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:06.150094 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:06.158118 systemd-logind[1556]: New session 8 of user core. Jan 30 14:18:06.166128 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:18:07.059745 sshd[4344]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:07.066984 systemd[1]: sshd@7-138.199.157.113:22-139.178.68.195:42164.service: Deactivated successfully. Jan 30 14:18:07.071711 systemd-logind[1556]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:18:07.072887 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:18:07.075193 systemd-logind[1556]: Removed session 8. Jan 30 14:18:12.206741 systemd[1]: Started sshd@8-138.199.157.113:22-139.178.68.195:42172.service - OpenSSH per-connection server daemon (139.178.68.195:42172). Jan 30 14:18:13.189078 sshd[4359]: Accepted publickey for core from 139.178.68.195 port 42172 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:13.191304 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:13.198473 systemd-logind[1556]: New session 9 of user core. Jan 30 14:18:13.202725 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:18:13.936145 sshd[4359]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:13.940223 systemd[1]: sshd@8-138.199.157.113:22-139.178.68.195:42172.service: Deactivated successfully. 
Jan 30 14:18:13.947968 systemd-logind[1556]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:18:13.948049 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:18:13.950984 systemd-logind[1556]: Removed session 9. Jan 30 14:18:19.102648 systemd[1]: Started sshd@9-138.199.157.113:22-139.178.68.195:58850.service - OpenSSH per-connection server daemon (139.178.68.195:58850). Jan 30 14:18:20.075492 sshd[4376]: Accepted publickey for core from 139.178.68.195 port 58850 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:20.076999 sshd[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:20.084270 systemd-logind[1556]: New session 10 of user core. Jan 30 14:18:20.085752 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:18:20.839838 sshd[4376]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:20.846687 systemd[1]: sshd@9-138.199.157.113:22-139.178.68.195:58850.service: Deactivated successfully. Jan 30 14:18:20.851692 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:18:20.852606 systemd-logind[1556]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:18:20.853803 systemd-logind[1556]: Removed session 10. Jan 30 14:18:21.012906 systemd[1]: Started sshd@10-138.199.157.113:22-139.178.68.195:58852.service - OpenSSH per-connection server daemon (139.178.68.195:58852). Jan 30 14:18:21.993691 sshd[4391]: Accepted publickey for core from 139.178.68.195 port 58852 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:21.997457 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:22.001828 systemd-logind[1556]: New session 11 of user core. Jan 30 14:18:22.008785 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 14:18:22.786702 sshd[4391]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:22.793764 systemd-logind[1556]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:18:22.793919 systemd[1]: sshd@10-138.199.157.113:22-139.178.68.195:58852.service: Deactivated successfully. Jan 30 14:18:22.797286 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:18:22.800336 systemd-logind[1556]: Removed session 11. Jan 30 14:18:22.954766 systemd[1]: Started sshd@11-138.199.157.113:22-139.178.68.195:58866.service - OpenSSH per-connection server daemon (139.178.68.195:58866). Jan 30 14:18:23.932755 sshd[4403]: Accepted publickey for core from 139.178.68.195 port 58866 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:23.934897 sshd[4403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:23.940185 systemd-logind[1556]: New session 12 of user core. Jan 30 14:18:23.947970 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:18:24.684748 sshd[4403]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:24.690242 systemd[1]: sshd@11-138.199.157.113:22-139.178.68.195:58866.service: Deactivated successfully. Jan 30 14:18:24.690435 systemd-logind[1556]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:18:24.695744 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:18:24.697950 systemd-logind[1556]: Removed session 12. Jan 30 14:18:29.853707 systemd[1]: Started sshd@12-138.199.157.113:22-139.178.68.195:51936.service - OpenSSH per-connection server daemon (139.178.68.195:51936). Jan 30 14:18:30.841046 sshd[4417]: Accepted publickey for core from 139.178.68.195 port 51936 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:30.843723 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:30.848938 systemd-logind[1556]: New session 13 of user core. 
Jan 30 14:18:30.854711 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:18:31.601834 sshd[4417]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:31.609275 systemd[1]: sshd@12-138.199.157.113:22-139.178.68.195:51936.service: Deactivated successfully. Jan 30 14:18:31.614165 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:18:31.615629 systemd-logind[1556]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:18:31.617894 systemd-logind[1556]: Removed session 13. Jan 30 14:18:31.764880 systemd[1]: Started sshd@13-138.199.157.113:22-139.178.68.195:51940.service - OpenSSH per-connection server daemon (139.178.68.195:51940). Jan 30 14:18:32.745135 sshd[4434]: Accepted publickey for core from 139.178.68.195 port 51940 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:32.747805 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:32.755856 systemd-logind[1556]: New session 14 of user core. Jan 30 14:18:32.759808 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:18:33.542212 sshd[4434]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:33.548737 systemd-logind[1556]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:18:33.549185 systemd[1]: sshd@13-138.199.157.113:22-139.178.68.195:51940.service: Deactivated successfully. Jan 30 14:18:33.555290 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:18:33.557497 systemd-logind[1556]: Removed session 14. Jan 30 14:18:33.710852 systemd[1]: Started sshd@14-138.199.157.113:22-139.178.68.195:51950.service - OpenSSH per-connection server daemon (139.178.68.195:51950). 
Jan 30 14:18:34.688997 sshd[4446]: Accepted publickey for core from 139.178.68.195 port 51950 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:34.690973 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:34.696555 systemd-logind[1556]: New session 15 of user core. Jan 30 14:18:34.700356 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:18:37.093653 sshd[4446]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:37.106001 systemd[1]: sshd@14-138.199.157.113:22-139.178.68.195:51950.service: Deactivated successfully. Jan 30 14:18:37.110733 systemd-logind[1556]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:18:37.110842 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:18:37.113954 systemd-logind[1556]: Removed session 15. Jan 30 14:18:37.258120 systemd[1]: Started sshd@15-138.199.157.113:22-139.178.68.195:39736.service - OpenSSH per-connection server daemon (139.178.68.195:39736). Jan 30 14:18:38.233541 sshd[4465]: Accepted publickey for core from 139.178.68.195 port 39736 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:38.237348 sshd[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:38.243804 systemd-logind[1556]: New session 16 of user core. Jan 30 14:18:38.246712 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:18:39.094862 sshd[4465]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:39.100784 systemd-logind[1556]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:18:39.101663 systemd[1]: sshd@15-138.199.157.113:22-139.178.68.195:39736.service: Deactivated successfully. Jan 30 14:18:39.105944 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:18:39.107668 systemd-logind[1556]: Removed session 16. 
Jan 30 14:18:39.260661 systemd[1]: Started sshd@16-138.199.157.113:22-139.178.68.195:39748.service - OpenSSH per-connection server daemon (139.178.68.195:39748). Jan 30 14:18:40.241470 sshd[4478]: Accepted publickey for core from 139.178.68.195 port 39748 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:40.242226 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:40.247279 systemd-logind[1556]: New session 17 of user core. Jan 30 14:18:40.258435 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:18:41.012472 sshd[4478]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:41.019848 systemd[1]: sshd@16-138.199.157.113:22-139.178.68.195:39748.service: Deactivated successfully. Jan 30 14:18:41.027281 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:18:41.028987 systemd-logind[1556]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:18:41.030977 systemd-logind[1556]: Removed session 17. Jan 30 14:18:46.183703 systemd[1]: Started sshd@17-138.199.157.113:22-139.178.68.195:42008.service - OpenSSH per-connection server daemon (139.178.68.195:42008). Jan 30 14:18:47.157338 sshd[4495]: Accepted publickey for core from 139.178.68.195 port 42008 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:47.159390 sshd[4495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:47.165615 systemd-logind[1556]: New session 18 of user core. Jan 30 14:18:47.171706 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:18:47.906455 sshd[4495]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:47.911993 systemd[1]: sshd@17-138.199.157.113:22-139.178.68.195:42008.service: Deactivated successfully. Jan 30 14:18:47.916575 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:18:47.917902 systemd-logind[1556]: Session 18 logged out. 
Waiting for processes to exit. Jan 30 14:18:47.919236 systemd-logind[1556]: Removed session 18. Jan 30 14:18:52.884131 update_engine[1559]: I20250130 14:18:52.884049 1559 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 14:18:52.884131 update_engine[1559]: I20250130 14:18:52.884107 1559 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 14:18:52.884906 update_engine[1559]: I20250130 14:18:52.884470 1559 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 14:18:52.885330 update_engine[1559]: I20250130 14:18:52.885243 1559 omaha_request_params.cc:62] Current group set to lts Jan 30 14:18:52.885679 update_engine[1559]: I20250130 14:18:52.885437 1559 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 14:18:52.885679 update_engine[1559]: I20250130 14:18:52.885469 1559 update_attempter.cc:643] Scheduling an action processor start. Jan 30 14:18:52.885679 update_engine[1559]: I20250130 14:18:52.885502 1559 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:18:52.885679 update_engine[1559]: I20250130 14:18:52.885560 1559 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 14:18:52.885679 update_engine[1559]: I20250130 14:18:52.885704 1559 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:18:52.885679 update_engine[1559]: I20250130 14:18:52.885725 1559 omaha_request_action.cc:272] Request: Jan 30 14:18:52.885679 update_engine[1559]: Jan 30 14:18:52.885679 update_engine[1559]: Jan 30 14:18:52.885679 update_engine[1559]: Jan 30 14:18:52.885679 update_engine[1559]: Jan 30 14:18:52.885679 update_engine[1559]: Jan 30 14:18:52.885679 update_engine[1559]: Jan 30 14:18:52.885679 update_engine[1559]: Jan 30 14:18:52.885679 update_engine[1559]: Jan 30 14:18:52.885679 update_engine[1559]: I20250130 14:18:52.885738 1559 libcurl_http_fetcher.cc:47] 
Starting/Resuming transfer Jan 30 14:18:52.887478 locksmithd[1603]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 14:18:52.889098 update_engine[1559]: I20250130 14:18:52.889052 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:18:52.889706 update_engine[1559]: I20250130 14:18:52.889658 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:18:52.891366 update_engine[1559]: E20250130 14:18:52.891333 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:18:52.891443 update_engine[1559]: I20250130 14:18:52.891427 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 14:18:53.074761 systemd[1]: Started sshd@18-138.199.157.113:22-139.178.68.195:42012.service - OpenSSH per-connection server daemon (139.178.68.195:42012). Jan 30 14:18:54.049886 sshd[4509]: Accepted publickey for core from 139.178.68.195 port 42012 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:54.052347 sshd[4509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:54.059700 systemd-logind[1556]: New session 19 of user core. Jan 30 14:18:54.065797 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:18:54.796825 sshd[4509]: pam_unix(sshd:session): session closed for user core Jan 30 14:18:54.803230 systemd[1]: sshd@18-138.199.157.113:22-139.178.68.195:42012.service: Deactivated successfully. Jan 30 14:18:54.807896 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:18:54.809184 systemd-logind[1556]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:18:54.810204 systemd-logind[1556]: Removed session 19. Jan 30 14:18:54.970993 systemd[1]: Started sshd@19-138.199.157.113:22-139.178.68.195:33178.service - OpenSSH per-connection server daemon (139.178.68.195:33178). 
Jan 30 14:18:55.948811 sshd[4523]: Accepted publickey for core from 139.178.68.195 port 33178 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:18:55.951523 sshd[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:18:55.958592 systemd-logind[1556]: New session 20 of user core. Jan 30 14:18:55.963759 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:18:58.282423 containerd[1593]: time="2025-01-30T14:18:58.281701574Z" level=info msg="StopContainer for \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\" with timeout 30 (s)" Jan 30 14:18:58.287440 containerd[1593]: time="2025-01-30T14:18:58.286055649Z" level=info msg="Stop container \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\" with signal terminated" Jan 30 14:18:58.303538 containerd[1593]: time="2025-01-30T14:18:58.303451629Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:18:58.314300 containerd[1593]: time="2025-01-30T14:18:58.314130356Z" level=info msg="StopContainer for \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\" with timeout 2 (s)" Jan 30 14:18:58.314753 containerd[1593]: time="2025-01-30T14:18:58.314706160Z" level=info msg="Stop container \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\" with signal terminated" Jan 30 14:18:58.325240 systemd-networkd[1241]: lxc_health: Link DOWN Jan 30 14:18:58.325253 systemd-networkd[1241]: lxc_health: Lost carrier Jan 30 14:18:58.344021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94-rootfs.mount: Deactivated successfully. 
Jan 30 14:18:58.363036 containerd[1593]: time="2025-01-30T14:18:58.362852709Z" level=info msg="shim disconnected" id=3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94 namespace=k8s.io Jan 30 14:18:58.363036 containerd[1593]: time="2025-01-30T14:18:58.362931830Z" level=warning msg="cleaning up after shim disconnected" id=3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94 namespace=k8s.io Jan 30 14:18:58.363036 containerd[1593]: time="2025-01-30T14:18:58.362955830Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:18:58.369928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185-rootfs.mount: Deactivated successfully. Jan 30 14:18:58.377772 containerd[1593]: time="2025-01-30T14:18:58.377599428Z" level=info msg="shim disconnected" id=f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185 namespace=k8s.io Jan 30 14:18:58.377772 containerd[1593]: time="2025-01-30T14:18:58.377654588Z" level=warning msg="cleaning up after shim disconnected" id=f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185 namespace=k8s.io Jan 30 14:18:58.377772 containerd[1593]: time="2025-01-30T14:18:58.377663148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:18:58.386532 containerd[1593]: time="2025-01-30T14:18:58.386148697Z" level=info msg="StopContainer for \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\" returns successfully" Jan 30 14:18:58.387054 containerd[1593]: time="2025-01-30T14:18:58.386844303Z" level=info msg="StopPodSandbox for \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\"" Jan 30 14:18:58.387054 containerd[1593]: time="2025-01-30T14:18:58.386882303Z" level=info msg="Container to stop \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:18:58.389056 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9-shm.mount: Deactivated successfully. Jan 30 14:18:58.401961 containerd[1593]: time="2025-01-30T14:18:58.401797863Z" level=info msg="StopContainer for \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\" returns successfully" Jan 30 14:18:58.401961 containerd[1593]: time="2025-01-30T14:18:58.403550197Z" level=info msg="StopPodSandbox for \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\"" Jan 30 14:18:58.401961 containerd[1593]: time="2025-01-30T14:18:58.403595478Z" level=info msg="Container to stop \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:18:58.401961 containerd[1593]: time="2025-01-30T14:18:58.403608078Z" level=info msg="Container to stop \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:18:58.401961 containerd[1593]: time="2025-01-30T14:18:58.403657318Z" level=info msg="Container to stop \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:18:58.401961 containerd[1593]: time="2025-01-30T14:18:58.403667478Z" level=info msg="Container to stop \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:18:58.401961 containerd[1593]: time="2025-01-30T14:18:58.403676958Z" level=info msg="Container to stop \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 14:18:58.436649 containerd[1593]: time="2025-01-30T14:18:58.435080852Z" level=info msg="shim disconnected" id=057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9 
namespace=k8s.io Jan 30 14:18:58.436649 containerd[1593]: time="2025-01-30T14:18:58.435150612Z" level=warning msg="cleaning up after shim disconnected" id=057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9 namespace=k8s.io Jan 30 14:18:58.436649 containerd[1593]: time="2025-01-30T14:18:58.435216373Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:18:58.448072 containerd[1593]: time="2025-01-30T14:18:58.448017596Z" level=info msg="shim disconnected" id=92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67 namespace=k8s.io Jan 30 14:18:58.448476 containerd[1593]: time="2025-01-30T14:18:58.448269198Z" level=warning msg="cleaning up after shim disconnected" id=92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67 namespace=k8s.io Jan 30 14:18:58.448476 containerd[1593]: time="2025-01-30T14:18:58.448287958Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:18:58.451092 containerd[1593]: time="2025-01-30T14:18:58.450957660Z" level=info msg="TearDown network for sandbox \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\" successfully" Jan 30 14:18:58.451092 containerd[1593]: time="2025-01-30T14:18:58.450987420Z" level=info msg="StopPodSandbox for \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\" returns successfully" Jan 30 14:18:58.469563 containerd[1593]: time="2025-01-30T14:18:58.469439849Z" level=info msg="TearDown network for sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" successfully" Jan 30 14:18:58.469563 containerd[1593]: time="2025-01-30T14:18:58.469474329Z" level=info msg="StopPodSandbox for \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" returns successfully" Jan 30 14:18:58.503646 kubelet[2952]: I0130 14:18:58.502818 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-lib-modules\") 
pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.503646 kubelet[2952]: I0130 14:18:58.502921 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3184743-144f-4ee2-a12a-411c429df7ec-hubble-tls\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.503646 kubelet[2952]: I0130 14:18:58.502947 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.503646 kubelet[2952]: I0130 14:18:58.502961 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-host-proc-sys-kernel\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.503646 kubelet[2952]: I0130 14:18:58.503012 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.504724 kubelet[2952]: I0130 14:18:58.503051 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cni-path\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.504724 kubelet[2952]: I0130 14:18:58.503095 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vplw8\" (UniqueName: \"kubernetes.io/projected/f3184743-144f-4ee2-a12a-411c429df7ec-kube-api-access-vplw8\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.504724 kubelet[2952]: I0130 14:18:58.503125 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f4010f9-c023-47d3-8a89-49c973adb9ba-cilium-config-path\") pod \"9f4010f9-c023-47d3-8a89-49c973adb9ba\" (UID: \"9f4010f9-c023-47d3-8a89-49c973adb9ba\") " Jan 30 14:18:58.504724 kubelet[2952]: I0130 14:18:58.503151 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-host-proc-sys-net\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.504724 kubelet[2952]: I0130 14:18:58.503190 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-run\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.504724 kubelet[2952]: I0130 14:18:58.503219 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfdwk\" 
(UniqueName: \"kubernetes.io/projected/9f4010f9-c023-47d3-8a89-49c973adb9ba-kube-api-access-wfdwk\") pod \"9f4010f9-c023-47d3-8a89-49c973adb9ba\" (UID: \"9f4010f9-c023-47d3-8a89-49c973adb9ba\") " Jan 30 14:18:58.505112 kubelet[2952]: I0130 14:18:58.503244 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-etc-cni-netd\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.505112 kubelet[2952]: I0130 14:18:58.503270 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-cgroup\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.505112 kubelet[2952]: I0130 14:18:58.503295 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-bpf-maps\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.505112 kubelet[2952]: I0130 14:18:58.503324 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3184743-144f-4ee2-a12a-411c429df7ec-clustermesh-secrets\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.505112 kubelet[2952]: I0130 14:18:58.503352 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-config-path\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.505112 kubelet[2952]: I0130 
14:18:58.503376 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-xtables-lock\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.505970 kubelet[2952]: I0130 14:18:58.503431 2952 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-hostproc\") pod \"f3184743-144f-4ee2-a12a-411c429df7ec\" (UID: \"f3184743-144f-4ee2-a12a-411c429df7ec\") " Jan 30 14:18:58.505970 kubelet[2952]: I0130 14:18:58.503481 2952 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-lib-modules\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.505970 kubelet[2952]: I0130 14:18:58.503499 2952 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-host-proc-sys-kernel\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.505970 kubelet[2952]: I0130 14:18:58.503534 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-hostproc" (OuterVolumeSpecName: "hostproc") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.505970 kubelet[2952]: I0130 14:18:58.503618 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cni-path" (OuterVolumeSpecName: "cni-path") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.508386 kubelet[2952]: I0130 14:18:58.507505 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.509128 kubelet[2952]: I0130 14:18:58.509100 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.509215 kubelet[2952]: I0130 14:18:58.509170 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.509344 kubelet[2952]: I0130 14:18:58.509320 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.509384 kubelet[2952]: I0130 14:18:58.509349 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.511036 kubelet[2952]: I0130 14:18:58.510157 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 14:18:58.512871 kubelet[2952]: I0130 14:18:58.512826 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f4010f9-c023-47d3-8a89-49c973adb9ba-kube-api-access-wfdwk" (OuterVolumeSpecName: "kube-api-access-wfdwk") pod "9f4010f9-c023-47d3-8a89-49c973adb9ba" (UID: "9f4010f9-c023-47d3-8a89-49c973adb9ba"). InnerVolumeSpecName "kube-api-access-wfdwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:18:58.512959 kubelet[2952]: I0130 14:18:58.512921 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3184743-144f-4ee2-a12a-411c429df7ec-kube-api-access-vplw8" (OuterVolumeSpecName: "kube-api-access-vplw8") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "kube-api-access-vplw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:18:58.512989 kubelet[2952]: I0130 14:18:58.512966 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3184743-144f-4ee2-a12a-411c429df7ec-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 14:18:58.513542 kubelet[2952]: I0130 14:18:58.513509 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f4010f9-c023-47d3-8a89-49c973adb9ba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9f4010f9-c023-47d3-8a89-49c973adb9ba" (UID: "9f4010f9-c023-47d3-8a89-49c973adb9ba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:18:58.514178 kubelet[2952]: I0130 14:18:58.514146 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3184743-144f-4ee2-a12a-411c429df7ec-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 14:18:58.515538 kubelet[2952]: I0130 14:18:58.515507 2952 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f3184743-144f-4ee2-a12a-411c429df7ec" (UID: "f3184743-144f-4ee2-a12a-411c429df7ec"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 14:18:58.605736 kubelet[2952]: I0130 14:18:58.605542 2952 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cni-path\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.605736 kubelet[2952]: I0130 14:18:58.605579 2952 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f3184743-144f-4ee2-a12a-411c429df7ec-hubble-tls\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.605736 kubelet[2952]: I0130 14:18:58.605592 2952 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9f4010f9-c023-47d3-8a89-49c973adb9ba-cilium-config-path\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.605736 kubelet[2952]: I0130 14:18:58.605603 2952 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vplw8\" (UniqueName: \"kubernetes.io/projected/f3184743-144f-4ee2-a12a-411c429df7ec-kube-api-access-vplw8\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.605736 kubelet[2952]: I0130 14:18:58.605614 2952 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-run\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.605736 kubelet[2952]: I0130 14:18:58.605625 2952 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wfdwk\" (UniqueName: \"kubernetes.io/projected/9f4010f9-c023-47d3-8a89-49c973adb9ba-kube-api-access-wfdwk\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.605736 kubelet[2952]: I0130 14:18:58.605636 2952 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-host-proc-sys-net\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.605736 kubelet[2952]: I0130 14:18:58.605648 2952 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-etc-cni-netd\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.606063 kubelet[2952]: I0130 14:18:58.605658 2952 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-cgroup\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.606063 kubelet[2952]: I0130 14:18:58.605667 2952 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f3184743-144f-4ee2-a12a-411c429df7ec-clustermesh-secrets\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.606063 kubelet[2952]: I0130 14:18:58.605678 2952 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-bpf-maps\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.606063 kubelet[2952]: I0130 14:18:58.605689 2952 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-xtables-lock\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.606063 kubelet[2952]: I0130 14:18:58.605699 2952 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f3184743-144f-4ee2-a12a-411c429df7ec-hostproc\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:58.606063 kubelet[2952]: I0130 14:18:58.605708 2952 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f3184743-144f-4ee2-a12a-411c429df7ec-cilium-config-path\") on node \"ci-4081-3-0-0-5370901337\" DevicePath \"\"" Jan 30 14:18:59.000740 kubelet[2952]: I0130 14:18:58.999495 2952 scope.go:117] "RemoveContainer" containerID="3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94" Jan 30 14:18:59.002324 containerd[1593]: time="2025-01-30T14:18:59.001562704Z" level=info msg="RemoveContainer for \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\"" Jan 30 14:18:59.008503 containerd[1593]: time="2025-01-30T14:18:59.008457159Z" level=info msg="RemoveContainer for \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\" returns successfully" Jan 30 14:18:59.009345 kubelet[2952]: I0130 14:18:59.009273 2952 scope.go:117] "RemoveContainer" containerID="3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94" Jan 30 14:18:59.010081 containerd[1593]: time="2025-01-30T14:18:59.009959131Z" level=error msg="ContainerStatus for \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\": not found" Jan 30 14:18:59.010388 kubelet[2952]: E0130 14:18:59.010338 2952 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\": not found" containerID="3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94" Jan 30 14:18:59.010473 kubelet[2952]: I0130 14:18:59.010371 2952 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94"} err="failed to get container status \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"3c77bb2b95878a1e596568c4aae7bb3bd1233ec985416c4ef891dd52b1d7af94\": not found" Jan 30 14:18:59.010473 kubelet[2952]: I0130 14:18:59.010464 2952 scope.go:117] "RemoveContainer" containerID="f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185" Jan 30 14:18:59.013272 containerd[1593]: time="2025-01-30T14:18:59.013242877Z" level=info msg="RemoveContainer for \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\"" Jan 30 14:18:59.018748 containerd[1593]: time="2025-01-30T14:18:59.018707521Z" level=info msg="RemoveContainer for \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\" returns successfully" Jan 30 14:18:59.019422 kubelet[2952]: I0130 14:18:59.019369 2952 scope.go:117] "RemoveContainer" containerID="513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f" Jan 30 14:18:59.023853 containerd[1593]: time="2025-01-30T14:18:59.023689241Z" level=info msg="RemoveContainer for \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\"" Jan 30 14:18:59.031205 containerd[1593]: time="2025-01-30T14:18:59.030966500Z" level=info msg="RemoveContainer for \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\" returns successfully" Jan 30 14:18:59.031598 kubelet[2952]: I0130 14:18:59.031384 2952 scope.go:117] "RemoveContainer" containerID="ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417" Jan 30 14:18:59.041734 containerd[1593]: time="2025-01-30T14:18:59.041039861Z" level=info msg="RemoveContainer for \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\"" Jan 30 14:18:59.046473 containerd[1593]: time="2025-01-30T14:18:59.045122853Z" level=info msg="RemoveContainer for \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\" returns successfully" Jan 30 14:18:59.046607 kubelet[2952]: I0130 14:18:59.045417 2952 scope.go:117] "RemoveContainer" containerID="78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7" 
Jan 30 14:18:59.049092 containerd[1593]: time="2025-01-30T14:18:59.048907884Z" level=info msg="RemoveContainer for \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\"" Jan 30 14:18:59.054959 containerd[1593]: time="2025-01-30T14:18:59.054904532Z" level=info msg="RemoveContainer for \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\" returns successfully" Jan 30 14:18:59.055867 kubelet[2952]: I0130 14:18:59.055707 2952 scope.go:117] "RemoveContainer" containerID="ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce" Jan 30 14:18:59.057799 containerd[1593]: time="2025-01-30T14:18:59.057704794Z" level=info msg="RemoveContainer for \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\"" Jan 30 14:18:59.062541 containerd[1593]: time="2025-01-30T14:18:59.062470152Z" level=info msg="RemoveContainer for \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\" returns successfully" Jan 30 14:18:59.063065 kubelet[2952]: I0130 14:18:59.062965 2952 scope.go:117] "RemoveContainer" containerID="f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185" Jan 30 14:18:59.063353 containerd[1593]: time="2025-01-30T14:18:59.063251279Z" level=error msg="ContainerStatus for \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\": not found" Jan 30 14:18:59.063889 kubelet[2952]: E0130 14:18:59.063839 2952 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\": not found" containerID="f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185" Jan 30 14:18:59.063889 kubelet[2952]: I0130 14:18:59.063880 2952 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185"} err="failed to get container status \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\": rpc error: code = NotFound desc = an error occurred when try to find container \"f071a2a87268fc2cbaba6ccf975dd8045ae36e189c597bedc32efd012ce9c185\": not found" Jan 30 14:18:59.064790 kubelet[2952]: I0130 14:18:59.063906 2952 scope.go:117] "RemoveContainer" containerID="513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f" Jan 30 14:18:59.064790 kubelet[2952]: E0130 14:18:59.064341 2952 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\": not found" containerID="513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f" Jan 30 14:18:59.064790 kubelet[2952]: I0130 14:18:59.064492 2952 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f"} err="failed to get container status \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\": not found" Jan 30 14:18:59.064790 kubelet[2952]: I0130 14:18:59.064535 2952 scope.go:117] "RemoveContainer" containerID="ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417" Jan 30 14:18:59.064914 containerd[1593]: time="2025-01-30T14:18:59.064099686Z" level=error msg="ContainerStatus for \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"513a778630cca0f0e04e0392cb3a387038f2a84b1fc06a39cbf2d59705df5a9f\": not found" Jan 30 14:18:59.064914 
containerd[1593]: time="2025-01-30T14:18:59.064782291Z" level=error msg="ContainerStatus for \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\": not found" Jan 30 14:18:59.065137 kubelet[2952]: E0130 14:18:59.065120 2952 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\": not found" containerID="ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417" Jan 30 14:18:59.065294 kubelet[2952]: I0130 14:18:59.065231 2952 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417"} err="failed to get container status \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce5ead67dc84372754323b8af51051040e7c22ce60ad9e5462198e33d6d06417\": not found" Jan 30 14:18:59.065294 kubelet[2952]: I0130 14:18:59.065256 2952 scope.go:117] "RemoveContainer" containerID="78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7" Jan 30 14:18:59.065601 containerd[1593]: time="2025-01-30T14:18:59.065566537Z" level=error msg="ContainerStatus for \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\": not found" Jan 30 14:18:59.065901 kubelet[2952]: E0130 14:18:59.065714 2952 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\": not found" containerID="78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7" Jan 30 14:18:59.065901 kubelet[2952]: I0130 14:18:59.065746 2952 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7"} err="failed to get container status \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\": rpc error: code = NotFound desc = an error occurred when try to find container \"78f01aab58838b15efeace46f119201bbe5f5d7aaedb711382e70f1094fe1cc7\": not found" Jan 30 14:18:59.065901 kubelet[2952]: I0130 14:18:59.065765 2952 scope.go:117] "RemoveContainer" containerID="ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce" Jan 30 14:18:59.066007 containerd[1593]: time="2025-01-30T14:18:59.065927980Z" level=error msg="ContainerStatus for \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\": not found" Jan 30 14:18:59.066149 kubelet[2952]: E0130 14:18:59.066123 2952 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\": not found" containerID="ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce" Jan 30 14:18:59.066231 kubelet[2952]: I0130 14:18:59.066156 2952 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce"} err="failed to get container status \"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"ae7e54512f0973746623648ddb3a159b18c293c4c9ff7f159f7ec588e711b3ce\": not found" Jan 30 14:18:59.283908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9-rootfs.mount: Deactivated successfully. Jan 30 14:18:59.284214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67-rootfs.mount: Deactivated successfully. Jan 30 14:18:59.284421 systemd[1]: var-lib-kubelet-pods-9f4010f9\x2dc023\x2d47d3\x2d8a89\x2d49c973adb9ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwfdwk.mount: Deactivated successfully. Jan 30 14:18:59.284662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67-shm.mount: Deactivated successfully. Jan 30 14:18:59.284847 systemd[1]: var-lib-kubelet-pods-f3184743\x2d144f\x2d4ee2\x2da12a\x2d411c429df7ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvplw8.mount: Deactivated successfully. Jan 30 14:18:59.285018 systemd[1]: var-lib-kubelet-pods-f3184743\x2d144f\x2d4ee2\x2da12a\x2d411c429df7ec-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 14:18:59.285212 systemd[1]: var-lib-kubelet-pods-f3184743\x2d144f\x2d4ee2\x2da12a\x2d411c429df7ec-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 14:19:00.190839 kubelet[2952]: E0130 14:19:00.190793 2952 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 14:19:00.359270 sshd[4523]: pam_unix(sshd:session): session closed for user core Jan 30 14:19:00.364263 systemd[1]: sshd@19-138.199.157.113:22-139.178.68.195:33178.service: Deactivated successfully. Jan 30 14:19:00.368843 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 30 14:19:00.369983 systemd-logind[1556]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:19:00.371270 systemd-logind[1556]: Removed session 20. Jan 30 14:19:00.529687 systemd[1]: Started sshd@20-138.199.157.113:22-139.178.68.195:33180.service - OpenSSH per-connection server daemon (139.178.68.195:33180). Jan 30 14:19:00.991527 kubelet[2952]: I0130 14:19:00.990782 2952 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f4010f9-c023-47d3-8a89-49c973adb9ba" path="/var/lib/kubelet/pods/9f4010f9-c023-47d3-8a89-49c973adb9ba/volumes" Jan 30 14:19:00.991942 kubelet[2952]: I0130 14:19:00.991907 2952 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3184743-144f-4ee2-a12a-411c429df7ec" path="/var/lib/kubelet/pods/f3184743-144f-4ee2-a12a-411c429df7ec/volumes" Jan 30 14:19:01.518547 sshd[4694]: Accepted publickey for core from 139.178.68.195 port 33180 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:19:01.521444 sshd[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:19:01.527981 systemd-logind[1556]: New session 21 of user core. Jan 30 14:19:01.531078 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 30 14:19:02.349424 kubelet[2952]: I0130 14:19:02.348887 2952 setters.go:580] "Node became not ready" node="ci-4081-3-0-0-5370901337" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T14:19:02Z","lastTransitionTime":"2025-01-30T14:19:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 14:19:02.882127 update_engine[1559]: I20250130 14:19:02.881981 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:19:02.883020 update_engine[1559]: I20250130 14:19:02.882766 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:19:02.883020 update_engine[1559]: I20250130 14:19:02.882970 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:19:02.883836 update_engine[1559]: E20250130 14:19:02.883760 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:19:02.883836 update_engine[1559]: I20250130 14:19:02.883813 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 14:19:02.967707 kubelet[2952]: I0130 14:19:02.967059 2952 topology_manager.go:215] "Topology Admit Handler" podUID="45e677d4-c7ae-499e-bb50-1ef2b8208716" podNamespace="kube-system" podName="cilium-8s6mt" Jan 30 14:19:02.967707 kubelet[2952]: E0130 14:19:02.967139 2952 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3184743-144f-4ee2-a12a-411c429df7ec" containerName="mount-bpf-fs" Jan 30 14:19:02.967707 kubelet[2952]: E0130 14:19:02.967150 2952 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3184743-144f-4ee2-a12a-411c429df7ec" containerName="clean-cilium-state" Jan 30 14:19:02.967707 kubelet[2952]: E0130 14:19:02.967156 2952 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f4010f9-c023-47d3-8a89-49c973adb9ba" containerName="cilium-operator" Jan 30 
14:19:02.967707 kubelet[2952]: E0130 14:19:02.967162 2952 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3184743-144f-4ee2-a12a-411c429df7ec" containerName="cilium-agent" Jan 30 14:19:02.967707 kubelet[2952]: E0130 14:19:02.967168 2952 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3184743-144f-4ee2-a12a-411c429df7ec" containerName="apply-sysctl-overwrites" Jan 30 14:19:02.967707 kubelet[2952]: E0130 14:19:02.967189 2952 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f3184743-144f-4ee2-a12a-411c429df7ec" containerName="mount-cgroup" Jan 30 14:19:02.967707 kubelet[2952]: I0130 14:19:02.967218 2952 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f4010f9-c023-47d3-8a89-49c973adb9ba" containerName="cilium-operator" Jan 30 14:19:02.967707 kubelet[2952]: I0130 14:19:02.967224 2952 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3184743-144f-4ee2-a12a-411c429df7ec" containerName="cilium-agent" Jan 30 14:19:03.033583 kubelet[2952]: I0130 14:19:03.033537 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-cilium-run\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033583 kubelet[2952]: I0130 14:19:03.033587 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-hostproc\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033767 kubelet[2952]: I0130 14:19:03.033608 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/45e677d4-c7ae-499e-bb50-1ef2b8208716-clustermesh-secrets\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033767 kubelet[2952]: I0130 14:19:03.033625 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-host-proc-sys-kernel\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033767 kubelet[2952]: I0130 14:19:03.033645 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45e677d4-c7ae-499e-bb50-1ef2b8208716-cilium-config-path\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033767 kubelet[2952]: I0130 14:19:03.033663 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45e677d4-c7ae-499e-bb50-1ef2b8208716-hubble-tls\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033767 kubelet[2952]: I0130 14:19:03.033696 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-cni-path\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033767 kubelet[2952]: I0130 14:19:03.033713 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-lib-modules\") pod \"cilium-8s6mt\" (UID: 
\"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033904 kubelet[2952]: I0130 14:19:03.033729 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-host-proc-sys-net\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033904 kubelet[2952]: I0130 14:19:03.033768 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-bpf-maps\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033904 kubelet[2952]: I0130 14:19:03.033789 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-etc-cni-netd\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033904 kubelet[2952]: I0130 14:19:03.033805 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/45e677d4-c7ae-499e-bb50-1ef2b8208716-cilium-ipsec-secrets\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033904 kubelet[2952]: I0130 14:19:03.033820 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-cilium-cgroup\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.033904 kubelet[2952]: I0130 
14:19:03.033836 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45e677d4-c7ae-499e-bb50-1ef2b8208716-xtables-lock\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.034029 kubelet[2952]: I0130 14:19:03.033854 2952 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddpwq\" (UniqueName: \"kubernetes.io/projected/45e677d4-c7ae-499e-bb50-1ef2b8208716-kube-api-access-ddpwq\") pod \"cilium-8s6mt\" (UID: \"45e677d4-c7ae-499e-bb50-1ef2b8208716\") " pod="kube-system/cilium-8s6mt" Jan 30 14:19:03.138240 sshd[4694]: pam_unix(sshd:session): session closed for user core Jan 30 14:19:03.170453 systemd[1]: sshd@20-138.199.157.113:22-139.178.68.195:33180.service: Deactivated successfully. Jan 30 14:19:03.172993 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:19:03.176332 systemd-logind[1556]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:19:03.178631 systemd-logind[1556]: Removed session 21. Jan 30 14:19:03.279614 containerd[1593]: time="2025-01-30T14:19:03.279035368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8s6mt,Uid:45e677d4-c7ae-499e-bb50-1ef2b8208716,Namespace:kube-system,Attempt:0,}" Jan 30 14:19:03.305493 containerd[1593]: time="2025-01-30T14:19:03.305120092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:19:03.305493 containerd[1593]: time="2025-01-30T14:19:03.305174533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:19:03.305493 containerd[1593]: time="2025-01-30T14:19:03.305214053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:03.305493 containerd[1593]: time="2025-01-30T14:19:03.305302414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:19:03.307372 systemd[1]: Started sshd@21-138.199.157.113:22-139.178.68.195:33182.service - OpenSSH per-connection server daemon (139.178.68.195:33182). Jan 30 14:19:03.339337 containerd[1593]: time="2025-01-30T14:19:03.339297800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8s6mt,Uid:45e677d4-c7ae-499e-bb50-1ef2b8208716,Namespace:kube-system,Attempt:0,} returns sandbox id \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\"" Jan 30 14:19:03.343930 containerd[1593]: time="2025-01-30T14:19:03.343894676Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 14:19:03.355795 containerd[1593]: time="2025-01-30T14:19:03.355750969Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"25820905436e6f183542eef7ab301f474cbd5716c3cdd9da383e336019bae545\"" Jan 30 14:19:03.357279 containerd[1593]: time="2025-01-30T14:19:03.356438255Z" level=info msg="StartContainer for \"25820905436e6f183542eef7ab301f474cbd5716c3cdd9da383e336019bae545\"" Jan 30 14:19:03.409709 containerd[1593]: time="2025-01-30T14:19:03.409165548Z" level=info msg="StartContainer for \"25820905436e6f183542eef7ab301f474cbd5716c3cdd9da383e336019bae545\" returns successfully" Jan 30 14:19:03.452263 containerd[1593]: time="2025-01-30T14:19:03.452147245Z" level=info msg="shim disconnected" id=25820905436e6f183542eef7ab301f474cbd5716c3cdd9da383e336019bae545 namespace=k8s.io Jan 30 14:19:03.452536 containerd[1593]: 
time="2025-01-30T14:19:03.452516927Z" level=warning msg="cleaning up after shim disconnected" id=25820905436e6f183542eef7ab301f474cbd5716c3cdd9da383e336019bae545 namespace=k8s.io Jan 30 14:19:03.452609 containerd[1593]: time="2025-01-30T14:19:03.452593888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:19:04.034392 containerd[1593]: time="2025-01-30T14:19:04.034353165Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 14:19:04.055928 containerd[1593]: time="2025-01-30T14:19:04.055879133Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad893790290c05dcff7b7c08c06a5744bc3af7345fe5da6f9dbabf61fb0a2239\"" Jan 30 14:19:04.057662 containerd[1593]: time="2025-01-30T14:19:04.056936821Z" level=info msg="StartContainer for \"ad893790290c05dcff7b7c08c06a5744bc3af7345fe5da6f9dbabf61fb0a2239\"" Jan 30 14:19:04.115330 containerd[1593]: time="2025-01-30T14:19:04.114795032Z" level=info msg="StartContainer for \"ad893790290c05dcff7b7c08c06a5744bc3af7345fe5da6f9dbabf61fb0a2239\" returns successfully" Jan 30 14:19:04.156407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad893790290c05dcff7b7c08c06a5744bc3af7345fe5da6f9dbabf61fb0a2239-rootfs.mount: Deactivated successfully. 
Jan 30 14:19:04.164544 containerd[1593]: time="2025-01-30T14:19:04.164455619Z" level=info msg="shim disconnected" id=ad893790290c05dcff7b7c08c06a5744bc3af7345fe5da6f9dbabf61fb0a2239 namespace=k8s.io Jan 30 14:19:04.164544 containerd[1593]: time="2025-01-30T14:19:04.164536419Z" level=warning msg="cleaning up after shim disconnected" id=ad893790290c05dcff7b7c08c06a5744bc3af7345fe5da6f9dbabf61fb0a2239 namespace=k8s.io Jan 30 14:19:04.164544 containerd[1593]: time="2025-01-30T14:19:04.164550939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:19:04.291478 sshd[4726]: Accepted publickey for core from 139.178.68.195 port 33182 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY Jan 30 14:19:04.293470 sshd[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:19:04.298278 systemd-logind[1556]: New session 22 of user core. Jan 30 14:19:04.305234 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 14:19:04.969626 sshd[4726]: pam_unix(sshd:session): session closed for user core Jan 30 14:19:04.977583 systemd[1]: sshd@21-138.199.157.113:22-139.178.68.195:33182.service: Deactivated successfully. Jan 30 14:19:04.983121 systemd-logind[1556]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:19:04.984010 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:19:04.986179 systemd-logind[1556]: Removed session 22. 
Jan 30 14:19:04.988798 kubelet[2952]: E0130 14:19:04.987817 2952 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-m5fx4" podUID="9eced152-0573-4e2f-925e-28ec0717c307" Jan 30 14:19:05.042530 containerd[1593]: time="2025-01-30T14:19:05.042474897Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 14:19:05.064012 containerd[1593]: time="2025-01-30T14:19:05.063969063Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"315fb2bdc476b9eaf3009111062104029e509e50cdb74c909dde3959f3d81183\"" Jan 30 14:19:05.066217 containerd[1593]: time="2025-01-30T14:19:05.066165960Z" level=info msg="StartContainer for \"315fb2bdc476b9eaf3009111062104029e509e50cdb74c909dde3959f3d81183\"" Jan 30 14:19:05.134110 containerd[1593]: time="2025-01-30T14:19:05.133488442Z" level=info msg="StartContainer for \"315fb2bdc476b9eaf3009111062104029e509e50cdb74c909dde3959f3d81183\" returns successfully" Jan 30 14:19:05.140747 systemd[1]: Started sshd@22-138.199.157.113:22-139.178.68.195:42560.service - OpenSSH per-connection server daemon (139.178.68.195:42560). Jan 30 14:19:05.171593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-315fb2bdc476b9eaf3009111062104029e509e50cdb74c909dde3959f3d81183-rootfs.mount: Deactivated successfully. 
Jan 30 14:19:05.183180 containerd[1593]: time="2025-01-30T14:19:05.183104586Z" level=info msg="shim disconnected" id=315fb2bdc476b9eaf3009111062104029e509e50cdb74c909dde3959f3d81183 namespace=k8s.io
Jan 30 14:19:05.183180 containerd[1593]: time="2025-01-30T14:19:05.183166386Z" level=warning msg="cleaning up after shim disconnected" id=315fb2bdc476b9eaf3009111062104029e509e50cdb74c909dde3959f3d81183 namespace=k8s.io
Jan 30 14:19:05.183180 containerd[1593]: time="2025-01-30T14:19:05.183179506Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:19:05.192461 kubelet[2952]: E0130 14:19:05.192416 2952 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 30 14:19:06.045170 containerd[1593]: time="2025-01-30T14:19:06.045116620Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 14:19:06.065665 containerd[1593]: time="2025-01-30T14:19:06.065503657Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5ba7a0817e636c2f45b03fcc79f4f61e7f921a2d23c0a9f7b64e483d46645706\""
Jan 30 14:19:06.068148 containerd[1593]: time="2025-01-30T14:19:06.066229222Z" level=info msg="StartContainer for \"5ba7a0817e636c2f45b03fcc79f4f61e7f921a2d23c0a9f7b64e483d46645706\""
Jan 30 14:19:06.123315 containerd[1593]: time="2025-01-30T14:19:06.119860995Z" level=info msg="StartContainer for \"5ba7a0817e636c2f45b03fcc79f4f61e7f921a2d23c0a9f7b64e483d46645706\" returns successfully"
Jan 30 14:19:06.129349 sshd[4917]: Accepted publickey for core from 139.178.68.195 port 42560 ssh2: RSA SHA256:DIoLrEEXhDQXEcb7Sbdn55587nkBWRNvhPQHIp9FpJY
Jan 30 14:19:06.131966 sshd[4917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:19:06.140285 systemd-logind[1556]: New session 23 of user core.
Jan 30 14:19:06.150783 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:19:06.157272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba7a0817e636c2f45b03fcc79f4f61e7f921a2d23c0a9f7b64e483d46645706-rootfs.mount: Deactivated successfully.
Jan 30 14:19:06.161036 containerd[1593]: time="2025-01-30T14:19:06.160953152Z" level=info msg="shim disconnected" id=5ba7a0817e636c2f45b03fcc79f4f61e7f921a2d23c0a9f7b64e483d46645706 namespace=k8s.io
Jan 30 14:19:06.161170 containerd[1593]: time="2025-01-30T14:19:06.161044312Z" level=warning msg="cleaning up after shim disconnected" id=5ba7a0817e636c2f45b03fcc79f4f61e7f921a2d23c0a9f7b64e483d46645706 namespace=k8s.io
Jan 30 14:19:06.161170 containerd[1593]: time="2025-01-30T14:19:06.161056433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:19:06.989247 kubelet[2952]: E0130 14:19:06.987458 2952 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-m5fx4" podUID="9eced152-0573-4e2f-925e-28ec0717c307"
Jan 30 14:19:07.053763 containerd[1593]: time="2025-01-30T14:19:07.053699343Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 14:19:07.074996 containerd[1593]: time="2025-01-30T14:19:07.074949626Z" level=info msg="CreateContainer within sandbox \"0df23e57f634f63f4d56d5ec20bbb8310c79bc8161ce35c86d13ea7d31cfc384\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d00d049f57e0c8acfa1183ee645543c15ab9e174f30dbd91313cf2d2703b7824\""
Jan 30 14:19:07.075733 containerd[1593]: time="2025-01-30T14:19:07.075704672Z" level=info msg="StartContainer for \"d00d049f57e0c8acfa1183ee645543c15ab9e174f30dbd91313cf2d2703b7824\""
Jan 30 14:19:07.135947 containerd[1593]: time="2025-01-30T14:19:07.135595610Z" level=info msg="StartContainer for \"d00d049f57e0c8acfa1183ee645543c15ab9e174f30dbd91313cf2d2703b7824\" returns successfully"
Jan 30 14:19:07.496419 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 14:19:08.828599 systemd[1]: run-containerd-runc-k8s.io-d00d049f57e0c8acfa1183ee645543c15ab9e174f30dbd91313cf2d2703b7824-runc.d05iME.mount: Deactivated successfully.
Jan 30 14:19:08.991453 kubelet[2952]: E0130 14:19:08.986840 2952 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-m5fx4" podUID="9eced152-0573-4e2f-925e-28ec0717c307"
Jan 30 14:19:10.466782 systemd-networkd[1241]: lxc_health: Link UP
Jan 30 14:19:10.472055 systemd-networkd[1241]: lxc_health: Gained carrier
Jan 30 14:19:11.312922 kubelet[2952]: I0130 14:19:11.312844 2952 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8s6mt" podStartSLOduration=9.312822945 podStartE2EDuration="9.312822945s" podCreationTimestamp="2025-01-30 14:19:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:19:08.084072187 +0000 UTC m=+353.213778308" watchObservedRunningTime="2025-01-30 14:19:11.312822945 +0000 UTC m=+356.442529066"
Jan 30 14:19:12.248624 systemd-networkd[1241]: lxc_health: Gained IPv6LL
Jan 30 14:19:12.889986 update_engine[1559]: I20250130 14:19:12.888595 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 14:19:12.889986 update_engine[1559]: I20250130 14:19:12.888845 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 14:19:12.889986 update_engine[1559]: I20250130 14:19:12.889055 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:19:12.891345 update_engine[1559]: E20250130 14:19:12.891182 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 14:19:12.891345 update_engine[1559]: I20250130 14:19:12.891301 1559 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 30 14:19:15.033534 containerd[1593]: time="2025-01-30T14:19:15.033377469Z" level=info msg="StopPodSandbox for \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\""
Jan 30 14:19:15.033534 containerd[1593]: time="2025-01-30T14:19:15.033522230Z" level=info msg="TearDown network for sandbox \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\" successfully"
Jan 30 14:19:15.033534 containerd[1593]: time="2025-01-30T14:19:15.033538910Z" level=info msg="StopPodSandbox for \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\" returns successfully"
Jan 30 14:19:15.035526 containerd[1593]: time="2025-01-30T14:19:15.034060914Z" level=info msg="RemovePodSandbox for \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\""
Jan 30 14:19:15.035526 containerd[1593]: time="2025-01-30T14:19:15.034106834Z" level=info msg="Forcibly stopping sandbox \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\""
Jan 30 14:19:15.035526 containerd[1593]: time="2025-01-30T14:19:15.034172915Z" level=info msg="TearDown network for sandbox \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\" successfully"
Jan 30 14:19:15.039732 containerd[1593]: time="2025-01-30T14:19:15.039685715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 14:19:15.040034 containerd[1593]: time="2025-01-30T14:19:15.039978917Z" level=info msg="RemovePodSandbox \"057c9bbfeaca64d8614ca607ee29665b17a0366850dbb013ecf18e8e2b86dfd9\" returns successfully"
Jan 30 14:19:15.041041 containerd[1593]: time="2025-01-30T14:19:15.040884524Z" level=info msg="StopPodSandbox for \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\""
Jan 30 14:19:15.041041 containerd[1593]: time="2025-01-30T14:19:15.040983884Z" level=info msg="TearDown network for sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" successfully"
Jan 30 14:19:15.041041 containerd[1593]: time="2025-01-30T14:19:15.040998885Z" level=info msg="StopPodSandbox for \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" returns successfully"
Jan 30 14:19:15.041817 containerd[1593]: time="2025-01-30T14:19:15.041733650Z" level=info msg="RemovePodSandbox for \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\""
Jan 30 14:19:15.041817 containerd[1593]: time="2025-01-30T14:19:15.041770170Z" level=info msg="Forcibly stopping sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\""
Jan 30 14:19:15.042014 containerd[1593]: time="2025-01-30T14:19:15.041832491Z" level=info msg="TearDown network for sandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" successfully"
Jan 30 14:19:15.047052 containerd[1593]: time="2025-01-30T14:19:15.046857567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 14:19:15.047052 containerd[1593]: time="2025-01-30T14:19:15.046934568Z" level=info msg="RemovePodSandbox \"92cffb5d2d535dbdaaa926ded0587e196b4173af60c0c29e9786dbc7a6536d67\" returns successfully"
Jan 30 14:19:17.548837 systemd[1]: run-containerd-runc-k8s.io-d00d049f57e0c8acfa1183ee645543c15ab9e174f30dbd91313cf2d2703b7824-runc.b3aX9D.mount: Deactivated successfully.
Jan 30 14:19:17.774742 sshd[4917]: pam_unix(sshd:session): session closed for user core
Jan 30 14:19:17.778631 systemd[1]: sshd@22-138.199.157.113:22-139.178.68.195:42560.service: Deactivated successfully.
Jan 30 14:19:17.778854 systemd-logind[1556]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:19:17.784114 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:19:17.786003 systemd-logind[1556]: Removed session 23.
Jan 30 14:19:22.882555 update_engine[1559]: I20250130 14:19:22.882086 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 14:19:22.882555 update_engine[1559]: I20250130 14:19:22.882489 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 14:19:22.883816 update_engine[1559]: I20250130 14:19:22.882751 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:19:22.883816 update_engine[1559]: E20250130 14:19:22.883725 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 14:19:22.883816 update_engine[1559]: I20250130 14:19:22.883793 1559 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 30 14:19:22.883816 update_engine[1559]: I20250130 14:19:22.883805 1559 omaha_request_action.cc:617] Omaha request response:
Jan 30 14:19:22.884380 update_engine[1559]: E20250130 14:19:22.883899 1559 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.883924 1559 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.883932 1559 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.883941 1559 update_attempter.cc:306] Processing Done.
Jan 30 14:19:22.884380 update_engine[1559]: E20250130 14:19:22.883958 1559 update_attempter.cc:619] Update failed.
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.883966 1559 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.883973 1559 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.883981 1559 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.884061 1559 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.884089 1559 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.884096 1559 omaha_request_action.cc:272] Request:
Jan 30 14:19:22.884380 update_engine[1559]:
Jan 30 14:19:22.884380 update_engine[1559]:
Jan 30 14:19:22.884380 update_engine[1559]:
Jan 30 14:19:22.884380 update_engine[1559]:
Jan 30 14:19:22.884380 update_engine[1559]:
Jan 30 14:19:22.884380 update_engine[1559]:
Jan 30 14:19:22.884380 update_engine[1559]: I20250130 14:19:22.884105 1559 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 14:19:22.885621 update_engine[1559]: I20250130 14:19:22.884500 1559 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 14:19:22.885621 update_engine[1559]: I20250130 14:19:22.884768 1559 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 14:19:22.886988 locksmithd[1603]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 30 14:19:22.887372 update_engine[1559]: E20250130 14:19:22.886656 1559 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 30 14:19:22.887372 update_engine[1559]: I20250130 14:19:22.886714 1559 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 30 14:19:22.887372 update_engine[1559]: I20250130 14:19:22.886723 1559 omaha_request_action.cc:617] Omaha request response:
Jan 30 14:19:22.887372 update_engine[1559]: I20250130 14:19:22.886731 1559 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 14:19:22.887372 update_engine[1559]: I20250130 14:19:22.886739 1559 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 30 14:19:22.887372 update_engine[1559]: I20250130 14:19:22.886746 1559 update_attempter.cc:306] Processing Done.
Jan 30 14:19:22.887372 update_engine[1559]: I20250130 14:19:22.886754 1559 update_attempter.cc:310] Error event sent.
Jan 30 14:19:22.887372 update_engine[1559]: I20250130 14:19:22.886763 1559 update_check_scheduler.cc:74] Next update check in 49m23s
Jan 30 14:19:22.888057 locksmithd[1603]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0