Jan 29 12:04:33.870881 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 12:04:33.870904 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 29 12:04:33.870914 kernel: KASLR enabled
Jan 29 12:04:33.870919 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 29 12:04:33.870925 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jan 29 12:04:33.870931 kernel: random: crng init done
Jan 29 12:04:33.870938 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:04:33.870944 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 29 12:04:33.870950 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 29 12:04:33.870958 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.870964 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.870970 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.870976 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.870982 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.870990 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.870998 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.871004 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.871011 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 12:04:33.871017 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 12:04:33.871024 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 29 12:04:33.871030 kernel: NUMA: Failed to initialise from firmware
Jan 29 12:04:33.871037 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 12:04:33.871043 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 29 12:04:33.871049 kernel: Zone ranges:
Jan 29 12:04:33.871056 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 12:04:33.871063 kernel: DMA32 empty
Jan 29 12:04:33.871103 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 29 12:04:33.871110 kernel: Movable zone start for each node
Jan 29 12:04:33.871116 kernel: Early memory node ranges
Jan 29 12:04:33.871123 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 29 12:04:33.871129 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 29 12:04:33.871136 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 29 12:04:33.871142 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 29 12:04:33.871174 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 29 12:04:33.871195 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 29 12:04:33.871201 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 29 12:04:33.871208 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 12:04:33.871218 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 29 12:04:33.871224 kernel: psci: probing for conduit method from ACPI.
Jan 29 12:04:33.871231 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 12:04:33.871240 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 12:04:33.871247 kernel: psci: Trusted OS migration not required
Jan 29 12:04:33.871254 kernel: psci: SMC Calling Convention v1.1
Jan 29 12:04:33.871262 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 12:04:33.871269 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 12:04:33.871276 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 12:04:33.871283 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 12:04:33.871289 kernel: Detected PIPT I-cache on CPU0
Jan 29 12:04:33.871296 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 12:04:33.871303 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 12:04:33.871309 kernel: CPU features: detected: Spectre-v4
Jan 29 12:04:33.871316 kernel: CPU features: detected: Spectre-BHB
Jan 29 12:04:33.871323 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 12:04:33.871331 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 12:04:33.871338 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 12:04:33.871345 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 12:04:33.871351 kernel: alternatives: applying boot alternatives
Jan 29 12:04:33.871359 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 12:04:33.871366 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:04:33.871373 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 12:04:33.871380 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:04:33.871387 kernel: Fallback order for Node 0: 0
Jan 29 12:04:33.871394 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 29 12:04:33.871400 kernel: Policy zone: Normal
Jan 29 12:04:33.871409 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:04:33.871416 kernel: software IO TLB: area num 2.
Jan 29 12:04:33.871422 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 29 12:04:33.871430 kernel: Memory: 3882936K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 213064K reserved, 0K cma-reserved)
Jan 29 12:04:33.871437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 12:04:33.871443 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:04:33.871451 kernel: rcu: RCU event tracing is enabled.
Jan 29 12:04:33.871458 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 12:04:33.871464 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 12:04:33.871471 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 12:04:33.871478 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:04:33.871486 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 12:04:33.871493 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 12:04:33.871500 kernel: GICv3: 256 SPIs implemented
Jan 29 12:04:33.871507 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 12:04:33.871513 kernel: Root IRQ handler: gic_handle_irq
Jan 29 12:04:33.871520 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 12:04:33.871527 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 12:04:33.871541 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 12:04:33.871549 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 12:04:33.871556 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 12:04:33.871563 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 29 12:04:33.871570 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 29 12:04:33.871580 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:04:33.871587 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:04:33.871594 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 12:04:33.871601 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 12:04:33.871608 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 12:04:33.871615 kernel: Console: colour dummy device 80x25
Jan 29 12:04:33.871622 kernel: ACPI: Core revision 20230628
Jan 29 12:04:33.871629 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 12:04:33.871636 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:04:33.871644 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:04:33.871652 kernel: landlock: Up and running.
Jan 29 12:04:33.871659 kernel: SELinux: Initializing.
Jan 29 12:04:33.871666 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:04:33.871673 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:04:33.871680 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:04:33.871687 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 12:04:33.871694 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:04:33.871701 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 12:04:33.871708 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 12:04:33.871717 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 12:04:33.871723 kernel: Remapping and enabling EFI services.
Jan 29 12:04:33.871730 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:04:33.871737 kernel: Detected PIPT I-cache on CPU1
Jan 29 12:04:33.871744 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 12:04:33.871751 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 29 12:04:33.871758 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:04:33.871765 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 12:04:33.871772 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 12:04:33.871779 kernel: SMP: Total of 2 processors activated.
Jan 29 12:04:33.871796 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 12:04:33.871803 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 12:04:33.871815 kernel: CPU features: detected: Common not Private translations
Jan 29 12:04:33.871824 kernel: CPU features: detected: CRC32 instructions
Jan 29 12:04:33.871832 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 12:04:33.871839 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 12:04:33.871846 kernel: CPU features: detected: LSE atomic instructions
Jan 29 12:04:33.871854 kernel: CPU features: detected: Privileged Access Never
Jan 29 12:04:33.871861 kernel: CPU features: detected: RAS Extension Support
Jan 29 12:04:33.871870 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 12:04:33.871877 kernel: CPU: All CPU(s) started at EL1
Jan 29 12:04:33.871885 kernel: alternatives: applying system-wide alternatives
Jan 29 12:04:33.871892 kernel: devtmpfs: initialized
Jan 29 12:04:33.871899 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:04:33.871907 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 12:04:33.871914 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:04:33.871923 kernel: SMBIOS 3.0.0 present.
Jan 29 12:04:33.871931 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 29 12:04:33.871938 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:04:33.871946 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 12:04:33.871953 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 12:04:33.871961 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 12:04:33.871968 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:04:33.871975 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Jan 29 12:04:33.871982 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:04:33.871992 kernel: cpuidle: using governor menu
Jan 29 12:04:33.871999 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 12:04:33.872007 kernel: ASID allocator initialised with 32768 entries
Jan 29 12:04:33.872014 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:04:33.872021 kernel: Serial: AMBA PL011 UART driver
Jan 29 12:04:33.872029 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 12:04:33.872036 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 12:04:33.872043 kernel: Modules: 509040 pages in range for PLT usage
Jan 29 12:04:33.872051 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:04:33.872060 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:04:33.872074 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 12:04:33.872082 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 12:04:33.872089 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:04:33.872097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:04:33.872104 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 12:04:33.872111 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 12:04:33.872119 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:04:33.872126 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:04:33.872135 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:04:33.872142 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:04:33.872512 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 12:04:33.872521 kernel: ACPI: Interpreter enabled
Jan 29 12:04:33.872528 kernel: ACPI: Using GIC for interrupt routing
Jan 29 12:04:33.872536 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 12:04:33.872543 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 12:04:33.872551 kernel: printk: console [ttyAMA0] enabled
Jan 29 12:04:33.872558 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:04:33.872716 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:04:33.872791 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 12:04:33.872857 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 12:04:33.872923 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 12:04:33.872987 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 12:04:33.872996 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 12:04:33.873004 kernel: PCI host bridge to bus 0000:00
Jan 29 12:04:33.873121 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 12:04:33.873595 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 12:04:33.873670 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 12:04:33.873727 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:04:33.873815 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 12:04:33.873893 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 29 12:04:33.873969 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 29 12:04:33.874036 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 12:04:33.874132 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.874273 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 29 12:04:33.874350 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.874418 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 29 12:04:33.874489 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.874563 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 29 12:04:33.874636 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.874702 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 29 12:04:33.874775 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.874843 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 29 12:04:33.874916 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.874986 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 29 12:04:33.875110 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.875231 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 29 12:04:33.875310 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.875375 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 29 12:04:33.875446 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 12:04:33.875517 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 29 12:04:33.875588 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 29 12:04:33.875653 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 29 12:04:33.875729 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 12:04:33.875799 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 29 12:04:33.875869 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 12:04:33.875940 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 12:04:33.876017 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 12:04:33.876136 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 29 12:04:33.878323 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 12:04:33.878400 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 29 12:04:33.878469 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 29 12:04:33.878545 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 12:04:33.878623 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 29 12:04:33.878710 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 12:04:33.878786 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 29 12:04:33.878856 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 29 12:04:33.878933 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 12:04:33.879004 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 29 12:04:33.879091 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 12:04:33.879210 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 12:04:33.879295 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 29 12:04:33.879369 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 29 12:04:33.879440 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 12:04:33.879512 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 29 12:04:33.879580 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 29 12:04:33.879651 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 29 12:04:33.879720 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 29 12:04:33.879787 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 29 12:04:33.879853 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 29 12:04:33.879923 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 12:04:33.879993 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 29 12:04:33.880059 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 29 12:04:33.880203 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 12:04:33.880288 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 29 12:04:33.880363 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 29 12:04:33.880430 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 12:04:33.880495 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 29 12:04:33.880559 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 29 12:04:33.880626 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 12:04:33.880696 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 29 12:04:33.880760 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 29 12:04:33.880827 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 12:04:33.880892 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 29 12:04:33.880957 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 29 12:04:33.881026 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 12:04:33.881132 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 29 12:04:33.883337 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 29 12:04:33.883430 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 12:04:33.883496 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 29 12:04:33.883562 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 29 12:04:33.883633 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 29 12:04:33.883701 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 12:04:33.883772 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 29 12:04:33.883840 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 12:04:33.883914 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 29 12:04:33.883980 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 12:04:33.884048 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 29 12:04:33.884138 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 12:04:33.884223 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 29 12:04:33.884298 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 12:04:33.884367 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 29 12:04:33.884438 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 12:04:33.884506 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 29 12:04:33.884573 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 12:04:33.884641 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 29 12:04:33.884707 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 12:04:33.884774 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 29 12:04:33.884843 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 12:04:33.884924 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 29 12:04:33.884994 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 29 12:04:33.885062 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 29 12:04:33.885401 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 12:04:33.885497 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 29 12:04:33.885566 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 12:04:33.885635 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 29 12:04:33.885710 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 12:04:33.885777 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 29 12:04:33.885844 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 12:04:33.885914 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 29 12:04:33.885984 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 12:04:33.886053 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 29 12:04:33.886287 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 12:04:33.886367 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 29 12:04:33.886442 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 12:04:33.886511 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 29 12:04:33.886578 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 12:04:33.886646 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 29 12:04:33.886712 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 29 12:04:33.886784 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 29 12:04:33.886860 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 29 12:04:33.886928 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 12:04:33.886999 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 29 12:04:33.887064 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 12:04:33.887175 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 12:04:33.887251 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 29 12:04:33.887319 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 12:04:33.887392 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 29 12:04:33.887464 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 12:04:33.887540 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 12:04:33.887609 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 29 12:04:33.887675 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 12:04:33.887749 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 12:04:33.887826 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 29 12:04:33.887897 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 12:04:33.887963 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 12:04:33.888028 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 29 12:04:33.888106 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 12:04:33.888265 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 12:04:33.888339 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 12:04:33.888402 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 12:04:33.888465 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 29 12:04:33.888535 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 12:04:33.888610 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 29 12:04:33.888677 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 29 12:04:33.888741 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 12:04:33.888805 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 12:04:33.888867 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 29 12:04:33.888930 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 12:04:33.889001 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 29 12:04:33.889115 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 29 12:04:33.889209 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 12:04:33.889279 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 12:04:33.889348 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 29 12:04:33.889414 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 12:04:33.889489 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 29 12:04:33.889559 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 29 12:04:33.889630 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 29 12:04:33.889702 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 12:04:33.889769 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 12:04:33.889835 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 29 12:04:33.889902 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 12:04:33.889968 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 12:04:33.890032 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 12:04:33.890113 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 29 12:04:33.890207 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 12:04:33.890282 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 12:04:33.890348 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 29 12:04:33.890414 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 29 12:04:33.890480 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 12:04:33.890547 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 12:04:33.890606 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 12:04:33.890665 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 12:04:33.890737 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 12:04:33.890804 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 29 12:04:33.890866 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 12:04:33.890935 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 29 12:04:33.890997 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 29 12:04:33.891057 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 12:04:33.894305 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 29 12:04:33.894409 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 29 12:04:33.894474 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 12:04:33.894553 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 29 12:04:33.894617 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 29 12:04:33.894676 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 12:04:33.894744 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 29 12:04:33.894807 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 29 12:04:33.894870 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 12:04:33.894942 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 29 12:04:33.895003 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 29 12:04:33.895066 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 12:04:33.895180 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 29 12:04:33.895247 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 29 12:04:33.895307 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 12:04:33.895375 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 29 12:04:33.895435 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 29 12:04:33.895494 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 12:04:33.895564 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 29 12:04:33.895629 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 29 12:04:33.895690 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 12:04:33.895700 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 12:04:33.895708 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 12:04:33.895716 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 12:04:33.895724 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 12:04:33.895732 kernel: iommu: Default domain type: Translated
Jan 29 12:04:33.895740 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 12:04:33.895750 kernel: efivars: Registered efivars operations
Jan 29 12:04:33.895758 kernel: vgaarb: loaded
Jan 29 12:04:33.895766 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 12:04:33.895774 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:04:33.895782 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:04:33.895790 kernel: pnp: PnP ACPI init
Jan 29 12:04:33.895862 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 12:04:33.895873 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 12:04:33.895884 kernel: NET: Registered PF_INET protocol family
Jan 29 12:04:33.895892 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:04:33.895900 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 12:04:33.895908 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:04:33.895916 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:04:33.895923 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 12:04:33.895932 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 12:04:33.895940 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:04:33.895947 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:04:33.895957 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:04:33.896033 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 29 12:04:33.896046 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:04:33.896054 kernel: kvm [1]: HYP mode not available
Jan 29 12:04:33.896062 kernel: Initialise system trusted keyrings
Jan 29 12:04:33.896104 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 12:04:33.896113 kernel: Key type asymmetric registered
Jan 29 12:04:33.896121 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:04:33.896129 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 12:04:33.896141 kernel: io scheduler mq-deadline registered
Jan 29 12:04:33.899803 kernel: io scheduler kyber registered
Jan 29 12:04:33.899821 kernel: io scheduler bfq registered
Jan 29 12:04:33.899830 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 12:04:33.901352 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 29 12:04:33.901444 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 29 12:04:33.901515 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 12:04:33.901600 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 29 12:04:33.901672 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 29 12:04:33.901742 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 29 12:04:33.901815 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 12:04:33.901887 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 12:04:33.901957 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 12:04:33.902036 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 12:04:33.902129 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 12:04:33.902235 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 12:04:33.902315 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 12:04:33.902386 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 12:04:33.902456 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 12:04:33.902535 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 12:04:33.902606 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 12:04:33.902789 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 12:04:33.902888 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 12:04:33.902958 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 12:04:33.903026 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 12:04:33.903173 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 12:04:33.903253 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 12:04:33.903321 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 
12:04:33.903332 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 12:04:33.903401 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 12:04:33.903467 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 12:04:33.903535 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 12:04:33.903551 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 12:04:33.903559 kernel: ACPI: button: Power Button [PWRB] Jan 29 12:04:33.903568 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 12:04:33.903718 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 12:04:33.903808 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 12:04:33.903820 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:04:33.903828 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 12:04:33.903900 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 12:04:33.903915 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 12:04:33.903923 kernel: thunder_xcv, ver 1.0 Jan 29 12:04:33.903931 kernel: thunder_bgx, ver 1.0 Jan 29 12:04:33.903939 kernel: nicpf, ver 1.0 Jan 29 12:04:33.903946 kernel: nicvf, ver 1.0 Jan 29 12:04:33.904035 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 12:04:33.904118 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T12:04:33 UTC (1738152273) Jan 29 12:04:33.904131 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 12:04:33.904142 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 12:04:33.904161 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 12:04:33.904183 kernel: watchdog: Hard watchdog permanently disabled Jan 29 12:04:33.904191 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:04:33.904200 kernel: Segment 
Routing with IPv6 Jan 29 12:04:33.904208 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:04:33.904215 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:04:33.904223 kernel: Key type dns_resolver registered Jan 29 12:04:33.904231 kernel: registered taskstats version 1 Jan 29 12:04:33.904242 kernel: Loading compiled-in X.509 certificates Jan 29 12:04:33.904250 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 29 12:04:33.904258 kernel: Key type .fscrypt registered Jan 29 12:04:33.904265 kernel: Key type fscrypt-provisioning registered Jan 29 12:04:33.904273 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:04:33.904281 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:04:33.904289 kernel: ima: No architecture policies found Jan 29 12:04:33.904296 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 12:04:33.904305 kernel: clk: Disabling unused clocks Jan 29 12:04:33.904314 kernel: Freeing unused kernel memory: 39360K Jan 29 12:04:33.904322 kernel: Run /init as init process Jan 29 12:04:33.904330 kernel: with arguments: Jan 29 12:04:33.904340 kernel: /init Jan 29 12:04:33.904348 kernel: with environment: Jan 29 12:04:33.904355 kernel: HOME=/ Jan 29 12:04:33.904363 kernel: TERM=linux Jan 29 12:04:33.904370 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:04:33.904380 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:04:33.904392 systemd[1]: Detected virtualization kvm. Jan 29 12:04:33.904401 systemd[1]: Detected architecture arm64. Jan 29 12:04:33.904408 systemd[1]: Running in initrd. 
Jan 29 12:04:33.904416 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:04:33.904424 systemd[1]: Hostname set to .
Jan 29 12:04:33.904433 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:04:33.904441 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:04:33.904451 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:04:33.904460 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:04:33.904469 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:04:33.904478 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:04:33.904486 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:04:33.904495 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:04:33.904505 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:04:33.904515 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:04:33.904523 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:04:33.904532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:04:33.904540 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:04:33.904548 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:04:33.904556 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:04:33.904565 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:04:33.904573 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:04:33.904583 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:04:33.904591 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:04:33.904600 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:04:33.904608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:04:33.904617 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:04:33.904625 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:04:33.904634 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:04:33.904642 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:04:33.904650 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:04:33.904661 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:04:33.904669 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:04:33.904677 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:04:33.904686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:04:33.904694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:04:33.904703 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:04:33.904711 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:04:33.904744 systemd-journald[237]: Collecting audit messages is disabled.
Jan 29 12:04:33.904767 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:04:33.904778 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:04:33.904787 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:04:33.904795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:33.904803 kernel: Bridge firewalling registered
Jan 29 12:04:33.904813 systemd-journald[237]: Journal started
Jan 29 12:04:33.904833 systemd-journald[237]: Runtime Journal (/run/log/journal/2941ac54bf2945c38ac9a7911a952ee2) is 8.0M, max 76.6M, 68.6M free.
Jan 29 12:04:33.909206 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:04:33.884252 systemd-modules-load[238]: Inserted module 'overlay'
Jan 29 12:04:33.905218 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 29 12:04:33.914181 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:04:33.914227 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:04:33.913968 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:04:33.922392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:04:33.926263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:04:33.928474 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:04:33.937176 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:04:33.939599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:04:33.945343 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:04:33.950548 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:04:33.954531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:04:33.958056 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:04:33.966102 dracut-cmdline[269]: dracut-dracut-053
Jan 29 12:04:33.969648 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 12:04:33.996794 systemd-resolved[276]: Positive Trust Anchors:
Jan 29 12:04:33.996817 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:04:33.996849 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:04:34.001935 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jan 29 12:04:34.003184 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:04:34.007876 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:04:34.059208 kernel: SCSI subsystem initialized
Jan 29 12:04:34.064185 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:04:34.072222 kernel: iscsi: registered transport (tcp)
Jan 29 12:04:34.085199 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:04:34.085251 kernel: QLogic iSCSI HBA Driver
Jan 29 12:04:34.135726 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:04:34.143467 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:04:34.165594 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:04:34.165657 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:04:34.165669 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:04:34.218202 kernel: raid6: neonx8 gen() 15672 MB/s
Jan 29 12:04:34.235208 kernel: raid6: neonx4 gen() 15602 MB/s
Jan 29 12:04:34.252214 kernel: raid6: neonx2 gen() 13199 MB/s
Jan 29 12:04:34.269213 kernel: raid6: neonx1 gen() 10437 MB/s
Jan 29 12:04:34.286235 kernel: raid6: int64x8 gen() 6928 MB/s
Jan 29 12:04:34.303238 kernel: raid6: int64x4 gen() 7313 MB/s
Jan 29 12:04:34.320236 kernel: raid6: int64x2 gen() 6106 MB/s
Jan 29 12:04:34.337219 kernel: raid6: int64x1 gen() 5043 MB/s
Jan 29 12:04:34.337310 kernel: raid6: using algorithm neonx8 gen() 15672 MB/s
Jan 29 12:04:34.354217 kernel: raid6: .... xor() 11879 MB/s, rmw enabled
Jan 29 12:04:34.354296 kernel: raid6: using neon recovery algorithm
Jan 29 12:04:34.359313 kernel: xor: measuring software checksum speed
Jan 29 12:04:34.359384 kernel: 8regs : 19721 MB/sec
Jan 29 12:04:34.359404 kernel: 32regs : 19636 MB/sec
Jan 29 12:04:34.359436 kernel: arm64_neon : 26954 MB/sec
Jan 29 12:04:34.360179 kernel: xor: using function: arm64_neon (26954 MB/sec)
Jan 29 12:04:34.410195 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:04:34.425287 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:04:34.431322 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:04:34.444486 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jan 29 12:04:34.447904 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:04:34.456014 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:04:34.473034 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 29 12:04:34.510322 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:04:34.517412 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:04:34.566328 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:04:34.574426 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:04:34.593848 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:04:34.596164 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:04:34.596795 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:04:34.597854 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:04:34.608326 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:04:34.626407 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:04:34.671208 kernel: ACPI: bus type USB registered
Jan 29 12:04:34.673177 kernel: usbcore: registered new interface driver usbfs
Jan 29 12:04:34.673217 kernel: usbcore: registered new interface driver hub
Jan 29 12:04:34.677183 kernel: scsi host0: Virtio SCSI HBA
Jan 29 12:04:34.678774 kernel: usbcore: registered new device driver usb
Jan 29 12:04:34.681176 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 12:04:34.682203 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 29 12:04:34.689818 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:04:34.689939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:04:34.693219 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:04:34.694738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:04:34.694930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:34.696510 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:04:34.703608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:04:34.712183 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 12:04:34.736987 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 29 12:04:34.737192 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 12:04:34.737308 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 12:04:34.737413 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 29 12:04:34.737498 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 12:04:34.737590 kernel: hub 1-0:1.0: USB hub found
Jan 29 12:04:34.737700 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 12:04:34.737786 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 12:04:34.737880 kernel: hub 2-0:1.0: USB hub found
Jan 29 12:04:34.737970 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 12:04:34.738048 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 29 12:04:34.739270 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 29 12:04:34.739384 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 12:04:34.739395 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 29 12:04:34.721401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:34.728400 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:04:34.755727 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:04:34.759385 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 29 12:04:34.767961 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 29 12:04:34.768122 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 29 12:04:34.768241 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 29 12:04:34.768326 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 12:04:34.768425 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:04:34.768435 kernel: GPT:17805311 != 80003071
Jan 29 12:04:34.768445 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:04:34.768455 kernel: GPT:17805311 != 80003071
Jan 29 12:04:34.768464 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:04:34.768474 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:04:34.768484 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 29 12:04:34.804552 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (507)
Jan 29 12:04:34.807203 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (513)
Jan 29 12:04:34.808480 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 29 12:04:34.826376 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 29 12:04:34.830789 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 29 12:04:34.832220 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 29 12:04:34.841037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 12:04:34.854578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:04:34.865479 disk-uuid[573]: Primary Header is updated.
Jan 29 12:04:34.865479 disk-uuid[573]: Secondary Entries is updated.
Jan 29 12:04:34.865479 disk-uuid[573]: Secondary Header is updated.
Jan 29 12:04:34.873188 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:04:34.876174 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:04:34.881183 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:04:34.975391 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 12:04:35.217219 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 29 12:04:35.356173 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 29 12:04:35.356239 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 29 12:04:35.356501 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 29 12:04:35.411305 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 29 12:04:35.411636 kernel: usbcore: registered new interface driver usbhid
Jan 29 12:04:35.411656 kernel: usbhid: USB HID core driver
Jan 29 12:04:35.885183 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 12:04:35.886220 disk-uuid[574]: The operation has completed successfully.
Jan 29 12:04:35.941604 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:04:35.941727 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:04:35.955413 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:04:35.962399 sh[592]: Success
Jan 29 12:04:35.976179 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 12:04:36.026229 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:04:36.033379 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:04:36.036161 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:04:36.060649 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 29 12:04:36.060722 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:04:36.060746 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:04:36.060779 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:04:36.061539 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:04:36.067192 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 12:04:36.069250 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:04:36.069873 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:04:36.081443 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:04:36.086123 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:04:36.099666 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:04:36.099725 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:04:36.099738 kernel: BTRFS info (device sda6): using free space tree
Jan 29 12:04:36.104184 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 12:04:36.104245 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 12:04:36.117009 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:04:36.117806 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:04:36.123882 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:04:36.131590 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:04:36.221813 ignition[688]: Ignition 2.19.0
Jan 29 12:04:36.221823 ignition[688]: Stage: fetch-offline
Jan 29 12:04:36.221861 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:36.221869 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 12:04:36.222028 ignition[688]: parsed url from cmdline: ""
Jan 29 12:04:36.222032 ignition[688]: no config URL provided
Jan 29 12:04:36.222036 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:04:36.222043 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:04:36.222048 ignition[688]: failed to fetch config: resource requires networking
Jan 29 12:04:36.226680 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:04:36.222452 ignition[688]: Ignition finished successfully
Jan 29 12:04:36.233706 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:04:36.240464 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:04:36.260513 systemd-networkd[781]: lo: Link UP
Jan 29 12:04:36.260526 systemd-networkd[781]: lo: Gained carrier
Jan 29 12:04:36.261981 systemd-networkd[781]: Enumeration completed
Jan 29 12:04:36.262300 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:04:36.262901 systemd[1]: Reached target network.target - Network.
Jan 29 12:04:36.264522 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:36.264525 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:04:36.265233 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:36.265236 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:04:36.265858 systemd-networkd[781]: eth0: Link UP
Jan 29 12:04:36.265861 systemd-networkd[781]: eth0: Gained carrier
Jan 29 12:04:36.265868 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:36.270459 systemd-networkd[781]: eth1: Link UP
Jan 29 12:04:36.270463 systemd-networkd[781]: eth1: Gained carrier
Jan 29 12:04:36.270470 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:36.271413 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 12:04:36.287366 ignition[784]: Ignition 2.19.0
Jan 29 12:04:36.287520 ignition[784]: Stage: fetch
Jan 29 12:04:36.287730 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:36.287742 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 12:04:36.287849 ignition[784]: parsed url from cmdline: ""
Jan 29 12:04:36.287852 ignition[784]: no config URL provided
Jan 29 12:04:36.287857 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:04:36.287866 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:04:36.287886 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 29 12:04:36.288493 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 12:04:36.298245 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 12:04:36.320278 systemd-networkd[781]: eth0: DHCPv4 address 159.69.53.160/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 12:04:36.488748 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 29 12:04:36.497137 ignition[784]: GET result: OK
Jan 29 12:04:36.497326 ignition[784]: parsing config with SHA512: ac57ad2bb31fd8ed0b5073e33d3e0f04027a3a342a378e9894c375653a73d0bf081411b92341c32c0ada52d7b3b850a2c1f3c1192e550138a1e51ef5bca1b7f0
Jan 29 12:04:36.503480 unknown[784]: fetched base config from "system"
Jan 29 12:04:36.503491 unknown[784]: fetched base config from "system"
Jan 29 12:04:36.503924 ignition[784]: fetch: fetch complete
Jan 29 12:04:36.503496 unknown[784]: fetched user config from "hetzner"
Jan 29 12:04:36.503929 ignition[784]: fetch: fetch passed
Jan 29 12:04:36.503970 ignition[784]: Ignition finished successfully
Jan 29 12:04:36.508075 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 12:04:36.519439 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:04:36.533325 ignition[791]: Ignition 2.19.0
Jan 29 12:04:36.533346 ignition[791]: Stage: kargs
Jan 29 12:04:36.533711 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:36.533733 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 12:04:36.536168 ignition[791]: kargs: kargs passed
Jan 29 12:04:36.536308 ignition[791]: Ignition finished successfully
Jan 29 12:04:36.540187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:04:36.545392 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:04:36.559936 ignition[797]: Ignition 2.19.0
Jan 29 12:04:36.559951 ignition[797]: Stage: disks
Jan 29 12:04:36.560747 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:36.560764 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 12:04:36.561745 ignition[797]: disks: disks passed
Jan 29 12:04:36.564237 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:04:36.561792 ignition[797]: Ignition finished successfully
Jan 29 12:04:36.565254 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:04:36.565958 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:04:36.566630 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:04:36.567136 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:04:36.567639 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:04:36.574531 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:04:36.590204 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 12:04:36.595689 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:04:36.607363 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:04:36.660178 kernel: EXT4-fs (sda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 29 12:04:36.660743 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:04:36.662328 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:04:36.668288 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:04:36.672328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:04:36.676368 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 12:04:36.678520 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:04:36.680669 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:04:36.685230 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (813)
Jan 29 12:04:36.685262 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:04:36.685272 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:04:36.685282 kernel: BTRFS info (device sda6): using free space tree
Jan 29 12:04:36.686112 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:04:36.689259 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 12:04:36.689309 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 12:04:36.695646 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:04:36.701684 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:04:36.750565 coreos-metadata[815]: Jan 29 12:04:36.750 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 29 12:04:36.751868 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:04:36.753336 coreos-metadata[815]: Jan 29 12:04:36.752 INFO Fetch successful
Jan 29 12:04:36.754847 coreos-metadata[815]: Jan 29 12:04:36.753 INFO wrote hostname ci-4081-3-0-2-f17d477515 to /sysroot/etc/hostname
Jan 29 12:04:36.757805 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 12:04:36.759826 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:04:36.764171 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:04:36.769444 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:04:36.860825 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:04:36.866257 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:04:36.868291 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:04:36.877228 kernel: BTRFS info (device sda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:04:36.898869 ignition[930]: INFO : Ignition 2.19.0
Jan 29 12:04:36.898869 ignition[930]: INFO : Stage: mount
Jan 29 12:04:36.898869 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:36.898869 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 12:04:36.902089 ignition[930]: INFO : mount: mount passed
Jan 29 12:04:36.902089 ignition[930]: INFO : Ignition finished successfully
Jan 29 12:04:36.902789 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:04:36.910432 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:04:36.913323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:04:37.061466 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:04:37.068458 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:04:37.079191 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942)
Jan 29 12:04:37.080708 kernel: BTRFS info (device sda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:04:37.080744 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:04:37.080775 kernel: BTRFS info (device sda6): using free space tree
Jan 29 12:04:37.084185 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 12:04:37.084231 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 12:04:37.087517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:04:37.111641 ignition[959]: INFO : Ignition 2.19.0
Jan 29 12:04:37.111641 ignition[959]: INFO : Stage: files
Jan 29 12:04:37.112793 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:37.112793 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 12:04:37.114570 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:04:37.116201 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:04:37.116201 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:04:37.119673 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:04:37.121158 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:04:37.121158 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:04:37.120085 unknown[959]: wrote ssh authorized keys file for user: core
Jan 29 12:04:37.124310 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 12:04:37.124310 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 12:04:37.204679 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 12:04:37.335292 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 12:04:37.335292 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 12:04:37.337380 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 12:04:37.888400 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 12:04:37.973407 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 12:04:37.973407 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:04:37.975969 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 12:04:37.985440 systemd-networkd[781]: eth1: Gained IPv6LL
Jan 29 12:04:38.049387 systemd-networkd[781]: eth0: Gained IPv6LL
Jan 29 12:04:38.487038 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 12:04:38.766351 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 12:04:38.766351 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:04:38.769782 ignition[959]: INFO : files: files passed
Jan 29 12:04:38.769782 ignition[959]: INFO : Ignition finished successfully
Jan 29 12:04:38.770446 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:04:38.777859 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:04:38.778994 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:04:38.787430 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:04:38.788194 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:04:38.794834 initrd-setup-root-after-ignition[987]: grep:
Jan 29 12:04:38.794834 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:04:38.796981 initrd-setup-root-after-ignition[987]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:04:38.796981 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:04:38.799968 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:04:38.801298 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:04:38.808471 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:04:38.843848 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:04:38.844104 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:04:38.846932 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:04:38.848449 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:04:38.849803 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:04:38.851363 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:04:38.872777 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:04:38.881444 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:04:38.902603 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:04:38.904055 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:04:38.906545 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:04:38.907895 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:04:38.908379 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:04:38.911200 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:04:38.911971 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:04:38.912894 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:04:38.913759 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:04:38.914758 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:04:38.915802 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:04:38.916747 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:04:38.917760 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:04:38.918772 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:04:38.919669 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:04:38.920436 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:04:38.920605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:04:38.921730 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:04:38.922790 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:04:38.923790 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:04:38.924770 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:04:38.925511 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:04:38.925632 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:04:38.927439 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:04:38.927549 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:04:38.928587 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:04:38.928676 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:04:38.929716 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 12:04:38.929807 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 12:04:38.937514 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:04:38.942752 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:04:38.944258 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:04:38.944409 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:04:38.945087 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:04:38.946561 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:04:38.954651 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:04:38.954834 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:04:38.962293 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:04:38.964001 ignition[1011]: INFO : Ignition 2.19.0
Jan 29 12:04:38.964947 ignition[1011]: INFO : Stage: umount
Jan 29 12:04:38.965634 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:04:38.965634 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 12:04:38.967066 ignition[1011]: INFO : umount: umount passed
Jan 29 12:04:38.967066 ignition[1011]: INFO : Ignition finished successfully
Jan 29 12:04:38.969389 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:04:38.969569 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:04:38.970692 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:04:38.970742 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:04:38.971659 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:04:38.971706 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:04:38.972405 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 12:04:38.972451 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 12:04:38.973938 systemd[1]: Stopped target network.target - Network.
Jan 29 12:04:38.974698 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:04:38.974757 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:04:38.975679 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:04:38.976459 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:04:38.980218 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:04:38.982388 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:04:38.983444 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:04:38.984903 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:04:38.984980 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:04:38.986321 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:04:38.986355 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:04:38.987176 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:04:38.987224 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:04:38.988291 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:04:38.988330 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:04:38.989308 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:04:38.990731 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:04:38.991811 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:04:38.991907 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:04:38.992932 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:04:38.993013 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:04:38.998239 systemd-networkd[781]: eth1: DHCPv6 lease lost
Jan 29 12:04:39.001335 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:04:39.001532 systemd-networkd[781]: eth0: DHCPv6 lease lost
Jan 29 12:04:39.001800 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:04:39.005848 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:04:39.006053 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:04:39.008188 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:04:39.008253 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:04:39.014329 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:04:39.015424 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:04:39.015535 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:04:39.017263 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:04:39.017347 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:04:39.018196 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:04:39.018234 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:04:39.019177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:04:39.019214 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:04:39.022225 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:04:39.031688 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:04:39.031807 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:04:39.039970 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:04:39.041384 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:04:39.043291 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:04:39.043427 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:04:39.045405 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:04:39.045440 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:04:39.046664 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:04:39.046717 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:04:39.048360 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:04:39.048403 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:04:39.049981 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:04:39.050023 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:04:39.057374 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:04:39.058316 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:04:39.058393 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:04:39.061967 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 12:04:39.062067 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:04:39.064483 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:04:39.064548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:04:39.066304 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:04:39.066353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:39.068444 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:04:39.070173 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:04:39.071987 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:04:39.083560 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:04:39.094136 systemd[1]: Switching root.
Jan 29 12:04:39.139315 systemd-journald[237]: Journal stopped
Jan 29 12:04:40.008601 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 29 12:04:40.008680 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 12:04:40.008697 kernel: SELinux: policy capability open_perms=1
Jan 29 12:04:40.008707 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 12:04:40.008717 kernel: SELinux: policy capability always_check_network=0
Jan 29 12:04:40.008726 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 12:04:40.008736 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 12:04:40.008745 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 12:04:40.008755 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 12:04:40.008764 kernel: audit: type=1403 audit(1738152279.291:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 12:04:40.008777 systemd[1]: Successfully loaded SELinux policy in 33.675ms.
Jan 29 12:04:40.008803 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.593ms.
Jan 29 12:04:40.008814 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:04:40.008825 systemd[1]: Detected virtualization kvm.
Jan 29 12:04:40.008835 systemd[1]: Detected architecture arm64.
Jan 29 12:04:40.008846 systemd[1]: Detected first boot.
Jan 29 12:04:40.008856 systemd[1]: Hostname set to .
Jan 29 12:04:40.008871 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:04:40.008881 zram_generator::config[1053]: No configuration found.
Jan 29 12:04:40.008894 systemd[1]: Populated /etc with preset unit settings.
Jan 29 12:04:40.008904 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 12:04:40.008915 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 12:04:40.008929 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 12:04:40.008941 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 12:04:40.008951 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 12:04:40.008961 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 12:04:40.008972 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 12:04:40.008984 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 12:04:40.008994 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 12:04:40.009004 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 12:04:40.009015 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 12:04:40.009027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:04:40.009051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:04:40.009063 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 12:04:40.009074 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 12:04:40.009085 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 12:04:40.009098 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:04:40.009109 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 12:04:40.009119 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:04:40.009130 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 12:04:40.009141 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 12:04:40.009165 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:04:40.009182 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 12:04:40.009193 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:04:40.009204 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:04:40.009214 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:04:40.009225 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:04:40.009236 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 12:04:40.009246 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 12:04:40.009260 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:04:40.009272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:04:40.009285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:04:40.009300 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 12:04:40.009311 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 12:04:40.009321 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 12:04:40.009337 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 12:04:40.009349 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 12:04:40.009361 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 12:04:40.009372 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 12:04:40.009383 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 12:04:40.009393 systemd[1]: Reached target machines.target - Containers.
Jan 29 12:04:40.009404 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 12:04:40.009415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:40.009426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:04:40.009436 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 12:04:40.009447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:04:40.009459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:04:40.009470 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:04:40.009480 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 12:04:40.009490 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:04:40.009502 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 12:04:40.009512 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 12:04:40.009523 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 12:04:40.009535 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 12:04:40.009548 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 12:04:40.009559 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:04:40.009570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:04:40.009581 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 12:04:40.009591 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 12:04:40.009606 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:04:40.009616 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 12:04:40.009626 systemd[1]: Stopped verity-setup.service.
Jan 29 12:04:40.009636 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 12:04:40.009648 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 12:04:40.009659 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 12:04:40.009669 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 12:04:40.009680 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 12:04:40.009690 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 12:04:40.009702 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:04:40.009713 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 12:04:40.009727 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 12:04:40.009738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:04:40.009748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:04:40.009759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:04:40.009771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:04:40.009784 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 12:04:40.009794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:04:40.009805 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 12:04:40.009816 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 12:04:40.009826 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 12:04:40.009837 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 12:04:40.009847 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:04:40.009860 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 12:04:40.009870 kernel: fuse: init (API version 7.39)
Jan 29 12:04:40.009880 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 12:04:40.009925 systemd-journald[1117]: Collecting audit messages is disabled.
Jan 29 12:04:40.009947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 12:04:40.009958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:40.009969 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 12:04:40.009982 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:04:40.009993 systemd-journald[1117]: Journal started
Jan 29 12:04:40.010015 systemd-journald[1117]: Runtime Journal (/run/log/journal/2941ac54bf2945c38ac9a7911a952ee2) is 8.0M, max 76.6M, 68.6M free.
Jan 29 12:04:39.752059 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 12:04:40.018269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 12:04:40.020185 kernel: loop: module loaded
Jan 29 12:04:40.020228 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:04:39.773795 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 12:04:39.774295 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 12:04:40.033341 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 12:04:40.033400 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:04:40.033418 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:04:40.034170 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 12:04:40.037292 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 12:04:40.037431 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 12:04:40.038282 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:04:40.038399 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:04:40.039374 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 12:04:40.040778 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 12:04:40.072280 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 12:04:40.077312 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 12:04:40.078834 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:04:40.079814 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 12:04:40.083481 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 12:04:40.085804 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 12:04:40.092237 kernel: ACPI: bus type drm_connector registered
Jan 29 12:04:40.094123 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 12:04:40.102373 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:04:40.102536 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:04:40.120024 kernel: loop0: detected capacity change from 0 to 114432
Jan 29 12:04:40.124616 systemd-journald[1117]: Time spent on flushing to /var/log/journal/2941ac54bf2945c38ac9a7911a952ee2 is 68.023ms for 1138 entries.
Jan 29 12:04:40.124616 systemd-journald[1117]: System Journal (/var/log/journal/2941ac54bf2945c38ac9a7911a952ee2) is 8.0M, max 584.8M, 576.8M free.
Jan 29 12:04:40.208650 systemd-journald[1117]: Received client request to flush runtime journal.
Jan 29 12:04:40.209285 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 12:04:40.209303 kernel: loop1: detected capacity change from 0 to 8
Jan 29 12:04:40.154784 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:04:40.165858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:04:40.179116 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Jan 29 12:04:40.179126 systemd-tmpfiles[1149]: ACLs are not supported, ignoring.
Jan 29 12:04:40.182439 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 12:04:40.195501 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:04:40.198374 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 12:04:40.199019 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 12:04:40.208368 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 12:04:40.214481 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 12:04:40.226777 kernel: loop2: detected capacity change from 0 to 194096
Jan 29 12:04:40.228296 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 12:04:40.266133 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 12:04:40.271289 kernel: loop3: detected capacity change from 0 to 114328
Jan 29 12:04:40.277285 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:04:40.296717 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 29 12:04:40.296735 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Jan 29 12:04:40.301397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:04:40.307893 kernel: loop4: detected capacity change from 0 to 114432
Jan 29 12:04:40.317493 kernel: loop5: detected capacity change from 0 to 8
Jan 29 12:04:40.317563 kernel: loop6: detected capacity change from 0 to 194096
Jan 29 12:04:40.334197 kernel: loop7: detected capacity change from 0 to 114328
Jan 29 12:04:40.346498 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 12:04:40.347596 (sd-merge)[1195]: Merged extensions into '/usr'.
Jan 29 12:04:40.356685 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 12:04:40.356819 systemd[1]: Reloading...
Jan 29 12:04:40.454176 zram_generator::config[1217]: No configuration found.
Jan 29 12:04:40.569437 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 12:04:40.591986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:04:40.638793 systemd[1]: Reloading finished in 280 ms.
Jan 29 12:04:40.669421 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 12:04:40.670758 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 12:04:40.682367 systemd[1]: Starting ensure-sysext.service...
Jan 29 12:04:40.685254 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:04:40.692769 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Jan 29 12:04:40.692912 systemd[1]: Reloading...
Jan 29 12:04:40.727643 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 12:04:40.727892 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 12:04:40.728550 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 12:04:40.728755 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 29 12:04:40.728811 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Jan 29 12:04:40.734571 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:04:40.734586 systemd-tmpfiles[1259]: Skipping /boot
Jan 29 12:04:40.752012 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:04:40.752074 systemd-tmpfiles[1259]: Skipping /boot
Jan 29 12:04:40.785180 zram_generator::config[1285]: No configuration found.
Jan 29 12:04:40.909718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:04:40.955780 systemd[1]: Reloading finished in 262 ms.
Jan 29 12:04:40.977654 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 12:04:40.985602 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:04:41.002652 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 12:04:41.007454 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 12:04:41.017560 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 12:04:41.021958 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:04:41.024579 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:04:41.030276 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 12:04:41.032737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:41.035547 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:04:41.042238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:04:41.047478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:04:41.048156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:41.054408 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 12:04:41.056574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:41.056741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:41.059961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:41.074439 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:04:41.075378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:41.076380 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 12:04:41.089212 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 12:04:41.092101 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Jan 29 12:04:41.094802 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:04:41.097372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:04:41.098885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:04:41.099052 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:04:41.100542 systemd[1]: Finished ensure-sysext.service.
Jan 29 12:04:41.108245 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 12:04:41.109770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:04:41.110625 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:04:41.120623 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:04:41.132484 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 12:04:41.133400 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:04:41.133689 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:04:41.137347 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 12:04:41.139229 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:04:41.146971 augenrules[1360]: No rules
Jan 29 12:04:41.150723 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:04:41.154211 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 12:04:41.168376 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:04:41.171371 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 12:04:41.172406 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 12:04:41.173983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 12:04:41.244553 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 12:04:41.246187 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 12:04:41.260869 systemd-networkd[1376]: lo: Link UP
Jan 29 12:04:41.260882 systemd-networkd[1376]: lo: Gained carrier
Jan 29 12:04:41.265050 systemd-networkd[1376]: Enumeration completed
Jan 29 12:04:41.265165 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:04:41.276469 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 12:04:41.280854 systemd-resolved[1334]: Positive Trust Anchors:
Jan 29 12:04:41.280903 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:04:41.280937 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:04:41.287841 systemd-resolved[1334]: Using system hostname 'ci-4081-3-0-2-f17d477515'.
Jan 29 12:04:41.291738 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:04:41.293332 systemd[1]: Reached target network.target - Network.
Jan 29 12:04:41.293807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:04:41.299347 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 12:04:41.361521 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:41.361533 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:04:41.363063 systemd-networkd[1376]: eth1: Link UP
Jan 29 12:04:41.363070 systemd-networkd[1376]: eth1: Gained carrier
Jan 29 12:04:41.363085 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:41.383457 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:41.383471 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:04:41.384431 systemd-networkd[1376]: eth0: Link UP
Jan 29 12:04:41.384437 systemd-networkd[1376]: eth0: Gained carrier
Jan 29 12:04:41.384452 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:04:41.394115 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 12:04:41.395679 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 12:04:41.416160 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1377)
Jan 29 12:04:41.416232 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 12:04:41.428597 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 29 12:04:41.428799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:04:41.448657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:04:41.449255 systemd-networkd[1376]: eth0: DHCPv4 address 159.69.53.160/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 12:04:41.450771 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 12:04:41.454789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:04:41.460360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:04:41.460988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:04:41.461045 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 12:04:41.461417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:04:41.462285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:04:41.468561 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 29 12:04:41.469311 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 12:04:41.469346 kernel: [drm] features: -context_init
Jan 29 12:04:41.484004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:04:41.486015 kernel: [drm] number of scanouts: 1
Jan 29 12:04:41.486078 kernel: [drm] number of cap sets: 0
Jan 29 12:04:41.485405 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:04:41.486521 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:04:41.488161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:04:41.490137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:04:41.490247 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:04:41.498878 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 12:04:41.506398 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 12:04:41.517374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:04:41.523310 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 29 12:04:41.538061 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 12:04:41.528613 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 12:04:41.545308 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 12:04:41.556749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:04:41.556978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:41.571625 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:04:41.630293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:04:41.699795 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 12:04:41.707415 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 12:04:41.723164 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:04:41.751292 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 12:04:41.753563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:04:41.754404 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:04:41.755142 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 12:04:41.755915 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 12:04:41.756865 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 12:04:41.757637 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 12:04:41.758499 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 12:04:41.759233 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 12:04:41.759340 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:04:41.759841 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:04:41.761583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 12:04:41.763587 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 12:04:41.769177 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 12:04:41.771344 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 12:04:41.772628 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 12:04:41.773447 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:04:41.774058 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:04:41.774689 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:04:41.774784 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:04:41.776877 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 12:04:41.781058 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:04:41.782943 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 12:04:41.788334 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 12:04:41.790749 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 12:04:41.794466 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 12:04:41.795019 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 12:04:41.798386 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 12:04:41.803246 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 12:04:41.810588 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 29 12:04:41.819333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 12:04:41.827420 jq[1448]: false
Jan 29 12:04:41.823345 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 12:04:41.828467 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 12:04:41.830919 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 12:04:41.832490 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 12:04:41.834757 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 12:04:41.838265 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 12:04:41.841213 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 12:04:41.842670 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 12:04:41.843206 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 12:04:41.844940 coreos-metadata[1446]: Jan 29 12:04:41.844 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 29 12:04:41.853206 coreos-metadata[1446]: Jan 29 12:04:41.849 INFO Fetch successful
Jan 29 12:04:41.853206 coreos-metadata[1446]: Jan 29 12:04:41.849 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 29 12:04:41.853206 coreos-metadata[1446]: Jan 29 12:04:41.850 INFO Fetch successful
Jan 29 12:04:41.884111 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 12:04:41.884309 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 12:04:41.889821 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 12:04:41.890117 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 12:04:41.900972 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 12:04:41.911501 dbus-daemon[1447]: [system] SELinux support is enabled
Jan 29 12:04:41.911817 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 12:04:41.915083 jq[1461]: true
Jan 29 12:04:41.917442 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 12:04:41.917508 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 12:04:41.919269 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 12:04:41.919304 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 12:04:41.923237 update_engine[1460]: I20250129 12:04:41.922936 1460 main.cc:92] Flatcar Update Engine starting
Jan 29 12:04:41.930749 update_engine[1460]: I20250129 12:04:41.930711 1460 update_check_scheduler.cc:74] Next update check in 6m30s
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found loop4
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found loop5
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found loop6
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found loop7
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found sda
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found sda1
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found sda2
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found sda3
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found usr
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found sda4
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found sda6
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found sda7
Jan 29 12:04:41.932128 extend-filesystems[1449]: Found sda9
Jan 29 12:04:41.932128 extend-filesystems[1449]: Checking size of /dev/sda9
Jan 29 12:04:41.978051 tar[1466]: linux-arm64/helm
Jan 29 12:04:41.938948 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 12:04:41.991274 extend-filesystems[1449]: Resized partition /dev/sda9
Jan 29 12:04:41.949337 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 12:04:41.998445 extend-filesystems[1495]: resize2fs 1.47.1 (20-May-2024)
Jan 29 12:04:42.001298 jq[1485]: true
Jan 29 12:04:42.007709 systemd-logind[1459]: New seat seat0.
Jan 29 12:04:42.017205 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 29 12:04:42.021180 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 12:04:42.021203 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 29 12:04:42.021421 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 12:04:42.058705 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 12:04:42.059706 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 12:04:42.111225 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1368)
Jan 29 12:04:42.137222 bash[1519]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 12:04:42.139189 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 12:04:42.150724 systemd[1]: Starting sshkeys.service...
Jan 29 12:04:42.188982 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 12:04:42.205726 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 12:04:42.209185 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 29 12:04:42.217640 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 12:04:42.225357 extend-filesystems[1495]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 29 12:04:42.225357 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 29 12:04:42.225357 extend-filesystems[1495]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 29 12:04:42.232878 extend-filesystems[1449]: Resized filesystem in /dev/sda9
Jan 29 12:04:42.232878 extend-filesystems[1449]: Found sr0
Jan 29 12:04:42.226297 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 12:04:42.228220 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 12:04:42.264521 coreos-metadata[1529]: Jan 29 12:04:42.260 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 29 12:04:42.264521 coreos-metadata[1529]: Jan 29 12:04:42.262 INFO Fetch successful
Jan 29 12:04:42.265425 containerd[1474]: time="2025-01-29T12:04:42.263013240Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 29 12:04:42.265741 unknown[1529]: wrote ssh authorized keys file for user: core
Jan 29 12:04:42.304728 update-ssh-keys[1535]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 12:04:42.308422 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 12:04:42.316230 systemd[1]: Finished sshkeys.service.
Jan 29 12:04:42.341442 containerd[1474]: time="2025-01-29T12:04:42.341388000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:04:42.343651 containerd[1474]: time="2025-01-29T12:04:42.343611840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:04:42.343764 containerd[1474]: time="2025-01-29T12:04:42.343748360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.343808120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.343969720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.343989240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.344099000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.344116800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.344303000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.344320200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.344334600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.344344400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.344415120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345173 containerd[1474]: time="2025-01-29T12:04:42.344605040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345402 containerd[1474]: time="2025-01-29T12:04:42.344710680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:04:42.345402 containerd[1474]: time="2025-01-29T12:04:42.344725360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 12:04:42.345402 containerd[1474]: time="2025-01-29T12:04:42.344793800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 12:04:42.345402 containerd[1474]: time="2025-01-29T12:04:42.344831240Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 12:04:42.356156 containerd[1474]: time="2025-01-29T12:04:42.355833000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 12:04:42.356156 containerd[1474]: time="2025-01-29T12:04:42.355902160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 12:04:42.356156 containerd[1474]: time="2025-01-29T12:04:42.355921440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 12:04:42.356156 containerd[1474]: time="2025-01-29T12:04:42.355936960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 12:04:42.356156 containerd[1474]: time="2025-01-29T12:04:42.355951240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 12:04:42.356156 containerd[1474]: time="2025-01-29T12:04:42.356136440Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356401240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356513400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356530960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356543800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356557560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356570120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356582520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356596160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356610600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356623640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356636200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356648960Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356669200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358154 containerd[1474]: time="2025-01-29T12:04:42.356682600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356694800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356707600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356719800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356732920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356748920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356762400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356774840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356789800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356801640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356812600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356826040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356843080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356862680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356874240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358413 containerd[1474]: time="2025-01-29T12:04:42.356884920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 12:04:42.358655 containerd[1474]: time="2025-01-29T12:04:42.357000800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 12:04:42.358655 containerd[1474]: time="2025-01-29T12:04:42.357053200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 12:04:42.358655 containerd[1474]: time="2025-01-29T12:04:42.357068320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 12:04:42.358655 containerd[1474]: time="2025-01-29T12:04:42.357080720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 12:04:42.358655 containerd[1474]: time="2025-01-29T12:04:42.357089760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358655 containerd[1474]: time="2025-01-29T12:04:42.357113640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 12:04:42.358655 containerd[1474]: time="2025-01-29T12:04:42.357123720Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 12:04:42.358655 containerd[1474]: time="2025-01-29T12:04:42.357133760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 12:04:42.358787 containerd[1474]: time="2025-01-29T12:04:42.357482680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 12:04:42.358787 containerd[1474]: time="2025-01-29T12:04:42.357541560Z" level=info msg="Connect containerd service"
Jan 29 12:04:42.358787 containerd[1474]: time="2025-01-29T12:04:42.357647160Z" level=info msg="using legacy CRI server"
Jan 29 12:04:42.358787 containerd[1474]: time="2025-01-29T12:04:42.357654560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 12:04:42.358787 containerd[1474]: time="2025-01-29T12:04:42.357735880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 12:04:42.361482 containerd[1474]: time="2025-01-29T12:04:42.361451800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 12:04:42.362067 containerd[1474]: time="2025-01-29T12:04:42.361602480Z" level=info msg="Start subscribing containerd event"
Jan 29 12:04:42.362067 containerd[1474]: time="2025-01-29T12:04:42.361665560Z" level=info msg="Start recovering state"
Jan 29 12:04:42.362067 containerd[1474]: time="2025-01-29T12:04:42.361732920Z" level=info msg="Start event monitor"
Jan 29 12:04:42.362067 containerd[1474]: time="2025-01-29T12:04:42.361743000Z" level=info msg="Start snapshots syncer"
Jan 29 12:04:42.362067 containerd[1474]: time="2025-01-29T12:04:42.361753280Z" level=info msg="Start cni network conf syncer for default"
Jan 29 12:04:42.362067 containerd[1474]: time="2025-01-29T12:04:42.361760680Z" level=info msg="Start streaming server"
Jan 29 12:04:42.362863 containerd[1474]: time="2025-01-29T12:04:42.362805520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 12:04:42.362863 containerd[1474]: time="2025-01-29T12:04:42.362860400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 12:04:42.363904 containerd[1474]: time="2025-01-29T12:04:42.363197800Z" level=info msg="containerd successfully booted in 0.102927s"
Jan 29 12:04:42.363290 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 12:04:42.578100 tar[1466]: linux-arm64/LICENSE
Jan 29 12:04:42.578100 tar[1466]: linux-arm64/README.md
Jan 29 12:04:42.587492 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 12:04:42.721313 systemd-networkd[1376]: eth0: Gained IPv6LL
Jan 29 12:04:42.722254 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 12:04:42.726637 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 12:04:42.727850 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 12:04:42.735626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:04:42.739384 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 12:04:42.761190 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 12:04:42.913940 systemd-networkd[1376]: eth1: Gained IPv6LL
Jan 29 12:04:42.915678 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 12:04:43.319293 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 12:04:43.343413 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 12:04:43.352760 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 12:04:43.363318 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 12:04:43.363909 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 12:04:43.374177 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 12:04:43.385142 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 12:04:43.391714 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 12:04:43.399524 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 12:04:43.400787 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 12:04:43.427318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:04:43.428436 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 12:04:43.429571 systemd[1]: Startup finished in 758ms (kernel) + 5.601s (initrd) + 4.171s (userspace) = 10.531s.
Jan 29 12:04:43.430430 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:04:43.997126 kubelet[1576]: E0129 12:04:43.996999 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:04:44.001094 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:04:44.001314 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:04:54.252196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 12:04:54.261501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:04:54.365083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:04:54.369651 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:04:54.434345 kubelet[1596]: E0129 12:04:54.434263 1596 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:04:54.438728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:04:54.438857 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:05:04.643180 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 12:05:04.649355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:05:04.779432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:05:04.781427 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:05:04.829977 kubelet[1612]: E0129 12:05:04.829874 1612 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:05:04.833482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:05:04.833786 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:05:13.691082 systemd-timesyncd[1356]: Contacted time server 193.203.3.170:123 (2.flatcar.pool.ntp.org).
Jan 29 12:05:13.691174 systemd-timesyncd[1356]: Initial clock synchronization to Wed 2025-01-29 12:05:13.690862 UTC.
Jan 29 12:05:13.691782 systemd-resolved[1334]: Clock change detected. Flushing caches.
Jan 29 12:05:15.316365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 12:05:15.334797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:05:15.439763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:05:15.452966 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:05:15.495877 kubelet[1627]: E0129 12:05:15.495815 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:05:15.498899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:05:15.499067 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:05:25.566397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 12:05:25.572625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:05:25.674595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:05:25.677779 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:05:25.716799 kubelet[1644]: E0129 12:05:25.716738 1644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:05:25.719050 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:05:25.719191 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:05:27.338493 update_engine[1460]: I20250129 12:05:27.337845 1460 update_attempter.cc:509] Updating boot flags...
Jan 29 12:05:27.390458 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1661)
Jan 29 12:05:27.446059 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1657)
Jan 29 12:05:27.494438 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1657)
Jan 29 12:05:35.816193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 29 12:05:35.827756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:05:35.944134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:05:35.949286 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:05:35.990969 kubelet[1681]: E0129 12:05:35.990894 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:05:35.995305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:05:35.995633 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:05:46.066037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 29 12:05:46.077796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:05:46.177428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:05:46.189320 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:05:46.238094 kubelet[1697]: E0129 12:05:46.238046 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:05:46.241027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:05:46.241221 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:05:56.316002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 29 12:05:56.324814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:05:56.433178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:05:56.448949 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:05:56.494730 kubelet[1713]: E0129 12:05:56.494684 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:05:56.496918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:05:56.497104 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:06:06.566207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 29 12:06:06.579969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:06:06.694553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:06:06.707220 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:06:06.754231 kubelet[1728]: E0129 12:06:06.754109 1728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:06:06.757205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:06:06.757507 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:06:16.816112 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 29 12:06:16.821613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:06:16.934737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:06:16.938772 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:06:16.986288 kubelet[1744]: E0129 12:06:16.986217 1744 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:06:16.989855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:06:16.990133 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:06:27.066116 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 29 12:06:27.078810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:06:27.185597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:06:27.193849 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:06:27.240971 kubelet[1760]: E0129 12:06:27.240908 1760 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:06:27.243190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:06:27.243322 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:06:37.316080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 29 12:06:37.329798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:06:37.434731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:06:37.445928 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:06:37.491935 kubelet[1776]: E0129 12:06:37.491878 1776 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:06:37.494061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:06:37.494190 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:06:37.644192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 12:06:37.651932 systemd[1]: Started sshd@0-159.69.53.160:22-139.178.89.65:58934.service - OpenSSH per-connection server daemon (139.178.89.65:58934).
Jan 29 12:06:38.638552 sshd[1786]: Accepted publickey for core from 139.178.89.65 port 58934 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:38.641147 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:38.652177 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 12:06:38.668940 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 12:06:38.674482 systemd-logind[1459]: New session 1 of user core.
Jan 29 12:06:38.685709 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 12:06:38.695130 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 12:06:38.699320 (systemd)[1790]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 12:06:38.807589 systemd[1790]: Queued start job for default target default.target.
Jan 29 12:06:38.819456 systemd[1790]: Created slice app.slice - User Application Slice.
Jan 29 12:06:38.819512 systemd[1790]: Reached target paths.target - Paths.
Jan 29 12:06:38.819542 systemd[1790]: Reached target timers.target - Timers.
Jan 29 12:06:38.822046 systemd[1790]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 12:06:38.836305 systemd[1790]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 12:06:38.836428 systemd[1790]: Reached target sockets.target - Sockets.
Jan 29 12:06:38.836456 systemd[1790]: Reached target basic.target - Basic System.
Jan 29 12:06:38.836531 systemd[1790]: Reached target default.target - Main User Target.
Jan 29 12:06:38.836585 systemd[1790]: Startup finished in 129ms.
Jan 29 12:06:38.836782 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 12:06:38.847143 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 12:06:39.545808 systemd[1]: Started sshd@1-159.69.53.160:22-139.178.89.65:58946.service - OpenSSH per-connection server daemon (139.178.89.65:58946).
Jan 29 12:06:40.516908 sshd[1801]: Accepted publickey for core from 139.178.89.65 port 58946 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:40.519072 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:40.525117 systemd-logind[1459]: New session 2 of user core.
Jan 29 12:06:40.530752 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 12:06:41.195036 sshd[1801]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:41.200204 systemd[1]: sshd@1-159.69.53.160:22-139.178.89.65:58946.service: Deactivated successfully.
Jan 29 12:06:41.202242 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 12:06:41.203806 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit.
Jan 29 12:06:41.205866 systemd-logind[1459]: Removed session 2.
Jan 29 12:06:41.364429 systemd[1]: Started sshd@2-159.69.53.160:22-139.178.89.65:33144.service - OpenSSH per-connection server daemon (139.178.89.65:33144).
Jan 29 12:06:42.350763 sshd[1808]: Accepted publickey for core from 139.178.89.65 port 33144 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:42.353280 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:42.360009 systemd-logind[1459]: New session 3 of user core.
Jan 29 12:06:42.366680 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 12:06:43.026843 sshd[1808]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:43.033131 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit.
Jan 29 12:06:43.034220 systemd[1]: sshd@2-159.69.53.160:22-139.178.89.65:33144.service: Deactivated successfully.
Jan 29 12:06:43.036530 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 12:06:43.039316 systemd-logind[1459]: Removed session 3.
Jan 29 12:06:43.196588 systemd[1]: Started sshd@3-159.69.53.160:22-139.178.89.65:33150.service - OpenSSH per-connection server daemon (139.178.89.65:33150).
Jan 29 12:06:44.184075 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 33150 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:44.186642 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:44.194369 systemd-logind[1459]: New session 4 of user core.
Jan 29 12:06:44.200710 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 12:06:44.861000 sshd[1815]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:44.865491 systemd[1]: sshd@3-159.69.53.160:22-139.178.89.65:33150.service: Deactivated successfully.
Jan 29 12:06:44.867249 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 12:06:44.868333 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit.
Jan 29 12:06:44.869637 systemd-logind[1459]: Removed session 4.
Jan 29 12:06:45.040889 systemd[1]: Started sshd@4-159.69.53.160:22-139.178.89.65:33166.service - OpenSSH per-connection server daemon (139.178.89.65:33166).
Jan 29 12:06:46.016254 sshd[1822]: Accepted publickey for core from 139.178.89.65 port 33166 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:46.018707 sshd[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:46.024289 systemd-logind[1459]: New session 5 of user core.
Jan 29 12:06:46.029729 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 12:06:46.547165 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 12:06:46.547463 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:06:46.563614 sudo[1825]: pam_unix(sudo:session): session closed for user root
Jan 29 12:06:46.724101 sshd[1822]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:46.729873 systemd[1]: sshd@4-159.69.53.160:22-139.178.89.65:33166.service: Deactivated successfully.
Jan 29 12:06:46.733751 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 12:06:46.734759 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit.
Jan 29 12:06:46.737037 systemd-logind[1459]: Removed session 5.
Jan 29 12:06:46.906247 systemd[1]: Started sshd@5-159.69.53.160:22-139.178.89.65:33182.service - OpenSSH per-connection server daemon (139.178.89.65:33182).
Jan 29 12:06:47.565632 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 29 12:06:47.573721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:06:47.684062 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:06:47.687868 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:06:47.727332 kubelet[1840]: E0129 12:06:47.727244 1840 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:06:47.730034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:06:47.730180 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:06:47.897287 sshd[1830]: Accepted publickey for core from 139.178.89.65 port 33182 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:47.898884 sshd[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:47.903222 systemd-logind[1459]: New session 6 of user core.
Jan 29 12:06:47.910715 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 12:06:48.426152 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 12:06:48.426877 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:06:48.433684 sudo[1850]: pam_unix(sudo:session): session closed for user root
Jan 29 12:06:48.439796 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 29 12:06:48.440164 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:06:48.459890 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 29 12:06:48.460388 auditctl[1853]: No rules
Jan 29 12:06:48.461817 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 12:06:48.462231 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 29 12:06:48.466399 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 12:06:48.509307 augenrules[1871]: No rules
Jan 29 12:06:48.510756 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 12:06:48.512266 sudo[1849]: pam_unix(sudo:session): session closed for user root
Jan 29 12:06:48.674824 sshd[1830]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:48.680230 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit.
Jan 29 12:06:48.681255 systemd[1]: sshd@5-159.69.53.160:22-139.178.89.65:33182.service: Deactivated successfully.
Jan 29 12:06:48.683959 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 12:06:48.686054 systemd-logind[1459]: Removed session 6.
Jan 29 12:06:48.846793 systemd[1]: Started sshd@6-159.69.53.160:22-139.178.89.65:33186.service - OpenSSH per-connection server daemon (139.178.89.65:33186).
Jan 29 12:06:49.815498 sshd[1879]: Accepted publickey for core from 139.178.89.65 port 33186 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:06:49.817559 sshd[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:49.823986 systemd-logind[1459]: New session 7 of user core.
Jan 29 12:06:49.827735 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 12:06:50.333493 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 12:06:50.334160 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:06:50.630746 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 12:06:50.632346 (dockerd)[1898]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 12:06:50.869463 dockerd[1898]: time="2025-01-29T12:06:50.868685309Z" level=info msg="Starting up"
Jan 29 12:06:50.963467 dockerd[1898]: time="2025-01-29T12:06:50.963141782Z" level=info msg="Loading containers: start."
Jan 29 12:06:51.066450 kernel: Initializing XFRM netlink socket
Jan 29 12:06:51.147629 systemd-networkd[1376]: docker0: Link UP
Jan 29 12:06:51.169106 dockerd[1898]: time="2025-01-29T12:06:51.169001853Z" level=info msg="Loading containers: done."
Jan 29 12:06:51.185792 dockerd[1898]: time="2025-01-29T12:06:51.185727649Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 12:06:51.185982 dockerd[1898]: time="2025-01-29T12:06:51.185845927Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 29 12:06:51.185982 dockerd[1898]: time="2025-01-29T12:06:51.185968324Z" level=info msg="Daemon has completed initialization"
Jan 29 12:06:51.222671 dockerd[1898]: time="2025-01-29T12:06:51.222480778Z" level=info msg="API listen on /run/docker.sock"
Jan 29 12:06:51.223592 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 12:06:52.321263 containerd[1474]: time="2025-01-29T12:06:52.320944486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 12:06:53.015091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957590427.mount: Deactivated successfully.
Jan 29 12:06:54.318430 containerd[1474]: time="2025-01-29T12:06:54.318322029Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027"
Jan 29 12:06:54.320045 containerd[1474]: time="2025-01-29T12:06:54.319272612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:54.322441 containerd[1474]: time="2025-01-29T12:06:54.322378797Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:54.323575 containerd[1474]: time="2025-01-29T12:06:54.323539776Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.002549211s"
Jan 29 12:06:54.323639 containerd[1474]: time="2025-01-29T12:06:54.323580055Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 29 12:06:54.324698 containerd[1474]: time="2025-01-29T12:06:54.324636797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:54.352085 containerd[1474]: time="2025-01-29T12:06:54.352018989Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 12:06:55.744446 containerd[1474]: time="2025-01-29T12:06:55.743270840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:55.745558 containerd[1474]: time="2025-01-29T12:06:55.745528401Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581"
Jan 29 12:06:55.748461 containerd[1474]: time="2025-01-29T12:06:55.748367952Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:55.754263 containerd[1474]: time="2025-01-29T12:06:55.754228371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:55.755607 containerd[1474]: time="2025-01-29T12:06:55.755576468Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.40349084s"
Jan 29 12:06:55.755726 containerd[1474]: time="2025-01-29T12:06:55.755705265Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 29 12:06:55.778742 containerd[1474]: time="2025-01-29T12:06:55.778689508Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 12:06:56.832153 containerd[1474]: time="2025-01-29T12:06:56.831928398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:56.833956 containerd[1474]: time="2025-01-29T12:06:56.833147337Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358"
Jan 29 12:06:56.834924 containerd[1474]: time="2025-01-29T12:06:56.834866228Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:56.843375 containerd[1474]: time="2025-01-29T12:06:56.843297366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:56.845053 containerd[1474]: time="2025-01-29T12:06:56.845009097Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.066268151s"
Jan 29 12:06:56.845053 containerd[1474]: time="2025-01-29T12:06:56.845051017Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 29 12:06:56.866158 containerd[1474]: time="2025-01-29T12:06:56.866124382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 12:06:57.815630 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jan 29 12:06:57.822661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:06:57.855868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124984909.mount: Deactivated successfully.
Jan 29 12:06:57.944576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:06:57.951061 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:06:58.003677 kubelet[2134]: E0129 12:06:58.003636 2134 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:06:58.007089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:06:58.007213 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:06:58.229929 containerd[1474]: time="2025-01-29T12:06:58.229071189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:58.230834 containerd[1474]: time="2025-01-29T12:06:58.230758522Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738"
Jan 29 12:06:58.232285 containerd[1474]: time="2025-01-29T12:06:58.232221499Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:58.234384 containerd[1474]: time="2025-01-29T12:06:58.234315946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:58.235258 containerd[1474]: time="2025-01-29T12:06:58.235220211Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.368888673s"
Jan 29 12:06:58.235745 containerd[1474]: time="2025-01-29T12:06:58.235350329Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 29 12:06:58.258888 containerd[1474]: time="2025-01-29T12:06:58.258846795Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 12:06:58.830114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715427512.mount: Deactivated successfully.
Jan 29 12:06:59.400229 containerd[1474]: time="2025-01-29T12:06:59.400168871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:59.402435 containerd[1474]: time="2025-01-29T12:06:59.402329277Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Jan 29 12:06:59.402761 containerd[1474]: time="2025-01-29T12:06:59.402653592Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:59.406283 containerd[1474]: time="2025-01-29T12:06:59.405807463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:06:59.407081 containerd[1474]: time="2025-01-29T12:06:59.407040124Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.148146769s"
Jan 29 12:06:59.407081 containerd[1474]: time="2025-01-29T12:06:59.407074404Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 12:06:59.430336 containerd[1474]: time="2025-01-29T12:06:59.429337019Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 12:07:00.000045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799920097.mount: Deactivated successfully.
Jan 29 12:07:00.007438 containerd[1474]: time="2025-01-29T12:07:00.006556997Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:07:00.007438 containerd[1474]: time="2025-01-29T12:07:00.007378225Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Jan 29 12:07:00.008186 containerd[1474]: time="2025-01-29T12:07:00.008145173Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:07:00.011010 containerd[1474]: time="2025-01-29T12:07:00.010951211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:07:00.011916 containerd[1474]: time="2025-01-29T12:07:00.011807798Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 582.43222ms"
Jan 29 12:07:00.011916 containerd[1474]: time="2025-01-29T12:07:00.011838278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 29 12:07:00.031634 containerd[1474]: time="2025-01-29T12:07:00.031597540Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 12:07:00.613207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3260711526.mount: Deactivated successfully.
Jan 29 12:07:02.007829 containerd[1474]: time="2025-01-29T12:07:02.007760312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:07:02.009753 containerd[1474]: time="2025-01-29T12:07:02.009694925Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552"
Jan 29 12:07:02.010699 containerd[1474]: time="2025-01-29T12:07:02.010613832Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:07:02.014237 containerd[1474]: time="2025-01-29T12:07:02.014198940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:07:02.016242 containerd[1474]: time="2025-01-29T12:07:02.015677599Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 1.98404122s"
Jan 29 12:07:02.016242 containerd[1474]: time="2025-01-29T12:07:02.015723559Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 29 12:07:08.065780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Jan 29 12:07:08.075501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:07:08.185609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:07:08.193955 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:07:08.244015 kubelet[2315]: E0129 12:07:08.243565 2315 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:07:08.245991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:07:08.246122 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:07:08.592873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:07:08.600693 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:07:08.638658 systemd[1]: Reloading requested from client PID 2330 ('systemctl') (unit session-7.scope)...
Jan 29 12:07:08.638816 systemd[1]: Reloading...
Jan 29 12:07:08.767491 zram_generator::config[2382]: No configuration found.
Jan 29 12:07:08.846230 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:07:08.913462 systemd[1]: Reloading finished in 274 ms.
Jan 29 12:07:08.955994 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:07:08.956071 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:07:08.956298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:07:08.961964 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:07:09.080803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:07:09.080914 (kubelet)[2419]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:07:09.131650 kubelet[2419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:07:09.131650 kubelet[2419]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:07:09.131650 kubelet[2419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:07:09.133041 kubelet[2419]: I0129 12:07:09.132949 2419 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:07:09.954723 kubelet[2419]: I0129 12:07:09.954592 2419 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:07:09.954723 kubelet[2419]: I0129 12:07:09.954642 2419 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:07:09.955037 kubelet[2419]: I0129 12:07:09.954997 2419 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:07:09.972562 kubelet[2419]: E0129 12:07:09.972528 2419 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://159.69.53.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:09.973801 kubelet[2419]: I0129 12:07:09.973682 2419 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:07:09.981398 kubelet[2419]: I0129 12:07:09.981311 2419 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:07:09.981939 kubelet[2419]: I0129 12:07:09.981877 2419 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:07:09.982136 kubelet[2419]: I0129 12:07:09.981916 2419 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-f17d477515","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:07:09.982234 kubelet[2419]: I0129 12:07:09.982193 2419 topology_manager.go:138] "Creating topology manager with none policy" Jan 
29 12:07:09.982234 kubelet[2419]: I0129 12:07:09.982203 2419 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:07:09.982495 kubelet[2419]: I0129 12:07:09.982462 2419 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:07:09.983594 kubelet[2419]: I0129 12:07:09.983569 2419 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:07:09.983594 kubelet[2419]: I0129 12:07:09.983591 2419 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:07:09.984068 kubelet[2419]: I0129 12:07:09.984037 2419 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:07:09.984131 kubelet[2419]: I0129 12:07:09.984120 2419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:07:09.986090 kubelet[2419]: W0129 12:07:09.986031 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.53.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-f17d477515&limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:09.986090 kubelet[2419]: E0129 12:07:09.986087 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://159.69.53.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-f17d477515&limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:09.986603 kubelet[2419]: W0129 12:07:09.986368 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.69.53.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:09.986603 kubelet[2419]: E0129 12:07:09.986438 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://159.69.53.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:09.986979 kubelet[2419]: I0129 12:07:09.986953 2419 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:07:09.987468 kubelet[2419]: I0129 12:07:09.987449 2419 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:07:09.989375 kubelet[2419]: W0129 12:07:09.987725 2419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:07:09.989375 kubelet[2419]: I0129 12:07:09.989030 2419 server.go:1264] "Started kubelet" Jan 29 12:07:09.996353 kubelet[2419]: E0129 12:07:09.996145 2419 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.53.160:6443/api/v1/namespaces/default/events\": dial tcp 159.69.53.160:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-2-f17d477515.181f286d0af9fc92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-f17d477515,UID:ci-4081-3-0-2-f17d477515,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-f17d477515,},FirstTimestamp:2025-01-29 12:07:09.989002386 +0000 UTC m=+0.902603332,LastTimestamp:2025-01-29 12:07:09.989002386 +0000 UTC m=+0.902603332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-f17d477515,}" Jan 29 12:07:09.997828 kubelet[2419]: I0129 12:07:09.997808 2419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:07:10.003295 kubelet[2419]: I0129 12:07:10.003244 2419 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 
12:07:10.004350 kubelet[2419]: I0129 12:07:10.004322 2419 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:07:10.005369 kubelet[2419]: I0129 12:07:10.005331 2419 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:07:10.006519 kubelet[2419]: I0129 12:07:10.006450 2419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:07:10.006736 kubelet[2419]: I0129 12:07:10.006713 2419 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:07:10.008475 kubelet[2419]: E0129 12:07:10.008447 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.53.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-f17d477515?timeout=10s\": dial tcp 159.69.53.160:6443: connect: connection refused" interval="200ms" Jan 29 12:07:10.009261 kubelet[2419]: I0129 12:07:10.008749 2419 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:07:10.009401 kubelet[2419]: I0129 12:07:10.009383 2419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:07:10.009725 kubelet[2419]: I0129 12:07:10.008981 2419 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:07:10.009798 kubelet[2419]: I0129 12:07:10.009005 2419 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:07:10.010855 kubelet[2419]: W0129 12:07:10.010817 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.53.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:10.010946 kubelet[2419]: E0129 12:07:10.010936 2419 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://159.69.53.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:10.011555 kubelet[2419]: I0129 12:07:10.011536 2419 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:07:10.018877 kubelet[2419]: I0129 12:07:10.018826 2419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:07:10.019846 kubelet[2419]: I0129 12:07:10.019817 2419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:07:10.019977 kubelet[2419]: I0129 12:07:10.019969 2419 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:07:10.020002 kubelet[2419]: I0129 12:07:10.019993 2419 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:07:10.020057 kubelet[2419]: E0129 12:07:10.020032 2419 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:07:10.026217 kubelet[2419]: W0129 12:07:10.026045 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.69.53.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:10.026217 kubelet[2419]: E0129 12:07:10.026125 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://159.69.53.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:10.040584 kubelet[2419]: I0129 12:07:10.040309 2419 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:07:10.040584 kubelet[2419]: I0129 
12:07:10.040326 2419 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:07:10.040584 kubelet[2419]: I0129 12:07:10.040343 2419 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:07:10.042742 kubelet[2419]: I0129 12:07:10.042644 2419 policy_none.go:49] "None policy: Start" Jan 29 12:07:10.043447 kubelet[2419]: I0129 12:07:10.043281 2419 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:07:10.043447 kubelet[2419]: I0129 12:07:10.043306 2419 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:07:10.052099 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:07:10.066920 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:07:10.072727 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 12:07:10.085447 kubelet[2419]: I0129 12:07:10.085178 2419 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:07:10.085658 kubelet[2419]: I0129 12:07:10.085550 2419 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:07:10.085899 kubelet[2419]: I0129 12:07:10.085722 2419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:07:10.089288 kubelet[2419]: E0129 12:07:10.089250 2419 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-2-f17d477515\" not found" Jan 29 12:07:10.108263 kubelet[2419]: I0129 12:07:10.107735 2419 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.108263 kubelet[2419]: E0129 12:07:10.108178 2419 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.69.53.160:6443/api/v1/nodes\": dial tcp 159.69.53.160:6443: connect: connection 
refused" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.120806 kubelet[2419]: I0129 12:07:10.120707 2419 topology_manager.go:215] "Topology Admit Handler" podUID="82ac880ae6e4838db939725877367ab8" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.123917 kubelet[2419]: I0129 12:07:10.123824 2419 topology_manager.go:215] "Topology Admit Handler" podUID="59e3b525b6bf64f15cae39b9342b9676" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.126571 kubelet[2419]: I0129 12:07:10.126518 2419 topology_manager.go:215] "Topology Admit Handler" podUID="2c24137695be8b0f524748bf812c8a73" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.136961 systemd[1]: Created slice kubepods-burstable-pod82ac880ae6e4838db939725877367ab8.slice - libcontainer container kubepods-burstable-pod82ac880ae6e4838db939725877367ab8.slice. Jan 29 12:07:10.154723 systemd[1]: Created slice kubepods-burstable-pod59e3b525b6bf64f15cae39b9342b9676.slice - libcontainer container kubepods-burstable-pod59e3b525b6bf64f15cae39b9342b9676.slice. Jan 29 12:07:10.171801 systemd[1]: Created slice kubepods-burstable-pod2c24137695be8b0f524748bf812c8a73.slice - libcontainer container kubepods-burstable-pod2c24137695be8b0f524748bf812c8a73.slice. 
Jan 29 12:07:10.210584 kubelet[2419]: E0129 12:07:10.210254 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.53.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-f17d477515?timeout=10s\": dial tcp 159.69.53.160:6443: connect: connection refused" interval="400ms" Jan 29 12:07:10.212095 kubelet[2419]: I0129 12:07:10.210647 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.212095 kubelet[2419]: I0129 12:07:10.210705 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.212095 kubelet[2419]: I0129 12:07:10.210750 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82ac880ae6e4838db939725877367ab8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-f17d477515\" (UID: \"82ac880ae6e4838db939725877367ab8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.212095 kubelet[2419]: I0129 12:07:10.210792 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82ac880ae6e4838db939725877367ab8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-f17d477515\" (UID: 
\"82ac880ae6e4838db939725877367ab8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.212095 kubelet[2419]: I0129 12:07:10.210833 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.212396 kubelet[2419]: I0129 12:07:10.210870 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.212396 kubelet[2419]: I0129 12:07:10.210907 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c24137695be8b0f524748bf812c8a73-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-f17d477515\" (UID: \"2c24137695be8b0f524748bf812c8a73\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.212396 kubelet[2419]: I0129 12:07:10.210964 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82ac880ae6e4838db939725877367ab8-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-f17d477515\" (UID: \"82ac880ae6e4838db939725877367ab8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.212396 kubelet[2419]: I0129 12:07:10.211004 2419 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.310295 kubelet[2419]: I0129 12:07:10.310256 2419 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.310677 kubelet[2419]: E0129 12:07:10.310642 2419 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.69.53.160:6443/api/v1/nodes\": dial tcp 159.69.53.160:6443: connect: connection refused" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.451514 containerd[1474]: time="2025-01-29T12:07:10.451365396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-f17d477515,Uid:82ac880ae6e4838db939725877367ab8,Namespace:kube-system,Attempt:0,}" Jan 29 12:07:10.469784 containerd[1474]: time="2025-01-29T12:07:10.469494547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-f17d477515,Uid:59e3b525b6bf64f15cae39b9342b9676,Namespace:kube-system,Attempt:0,}" Jan 29 12:07:10.476159 containerd[1474]: time="2025-01-29T12:07:10.475853073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-f17d477515,Uid:2c24137695be8b0f524748bf812c8a73,Namespace:kube-system,Attempt:0,}" Jan 29 12:07:10.611725 kubelet[2419]: E0129 12:07:10.611649 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.53.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-f17d477515?timeout=10s\": dial tcp 159.69.53.160:6443: connect: connection refused" interval="800ms" Jan 29 12:07:10.714263 kubelet[2419]: I0129 12:07:10.714200 2419 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.714820 kubelet[2419]: E0129 12:07:10.714728 2419 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://159.69.53.160:6443/api/v1/nodes\": dial tcp 159.69.53.160:6443: connect: connection refused" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:10.984551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3138787704.mount: Deactivated successfully. Jan 29 12:07:10.990635 containerd[1474]: time="2025-01-29T12:07:10.990582442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:07:10.991504 containerd[1474]: time="2025-01-29T12:07:10.991430832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 12:07:10.992360 containerd[1474]: time="2025-01-29T12:07:10.992296662Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:07:10.995436 containerd[1474]: time="2025-01-29T12:07:10.993927964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:07:10.995436 containerd[1474]: time="2025-01-29T12:07:10.994001803Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:07:10.995436 containerd[1474]: time="2025-01-29T12:07:10.994841753Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:07:10.995436 containerd[1474]: time="2025-01-29T12:07:10.995379027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 
12:07:10.998616 containerd[1474]: time="2025-01-29T12:07:10.998565710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:07:11.001274 containerd[1474]: time="2025-01-29T12:07:11.001240439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.749244ms" Jan 29 12:07:11.003271 containerd[1474]: time="2025-01-29T12:07:11.003225937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.290225ms" Jan 29 12:07:11.007019 containerd[1474]: time="2025-01-29T12:07:11.006980574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.364108ms" Jan 29 12:07:11.070838 kubelet[2419]: W0129 12:07:11.070756 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.53.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-f17d477515&limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:11.070953 kubelet[2419]: E0129 12:07:11.070855 2419 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://159.69.53.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-2-f17d477515&limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:11.130534 containerd[1474]: time="2025-01-29T12:07:11.130372024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:07:11.130687 containerd[1474]: time="2025-01-29T12:07:11.130645301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:07:11.130687 containerd[1474]: time="2025-01-29T12:07:11.130668541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:11.131336 containerd[1474]: time="2025-01-29T12:07:11.131285094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:11.133219 containerd[1474]: time="2025-01-29T12:07:11.133131673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:07:11.133219 containerd[1474]: time="2025-01-29T12:07:11.133190352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:07:11.133335 containerd[1474]: time="2025-01-29T12:07:11.133211232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:11.133335 containerd[1474]: time="2025-01-29T12:07:11.133286471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:11.135152 kubelet[2419]: W0129 12:07:11.135118 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.69.53.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:11.135282 kubelet[2419]: E0129 12:07:11.135270 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://159.69.53.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:11.137285 containerd[1474]: time="2025-01-29T12:07:11.137207867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:07:11.137863 containerd[1474]: time="2025-01-29T12:07:11.137432984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:07:11.137863 containerd[1474]: time="2025-01-29T12:07:11.137804460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:11.137986 containerd[1474]: time="2025-01-29T12:07:11.137909379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:11.159792 systemd[1]: Started cri-containerd-05bf1492e382cad0f9f4193a6f7f1f9d23ceb9907935671d358003f37214f10f.scope - libcontainer container 05bf1492e382cad0f9f4193a6f7f1f9d23ceb9907935671d358003f37214f10f. Jan 29 12:07:11.161491 systemd[1]: Started cri-containerd-80920a7ef82fc95d3e35429a125078e0cde84cd23cf74d8d7b13bf0b1303e743.scope - libcontainer container 80920a7ef82fc95d3e35429a125078e0cde84cd23cf74d8d7b13bf0b1303e743. 
Jan 29 12:07:11.171612 systemd[1]: Started cri-containerd-5b41d9015392e385a753c934dab40a5886e45b5d06791997d5cdef50329ffd20.scope - libcontainer container 5b41d9015392e385a753c934dab40a5886e45b5d06791997d5cdef50329ffd20. Jan 29 12:07:11.211103 containerd[1474]: time="2025-01-29T12:07:11.210972676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-2-f17d477515,Uid:82ac880ae6e4838db939725877367ab8,Namespace:kube-system,Attempt:0,} returns sandbox id \"05bf1492e382cad0f9f4193a6f7f1f9d23ceb9907935671d358003f37214f10f\"" Jan 29 12:07:11.216817 containerd[1474]: time="2025-01-29T12:07:11.216632532Z" level=info msg="CreateContainer within sandbox \"05bf1492e382cad0f9f4193a6f7f1f9d23ceb9907935671d358003f37214f10f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:07:11.220362 containerd[1474]: time="2025-01-29T12:07:11.220323010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-2-f17d477515,Uid:2c24137695be8b0f524748bf812c8a73,Namespace:kube-system,Attempt:0,} returns sandbox id \"80920a7ef82fc95d3e35429a125078e0cde84cd23cf74d8d7b13bf0b1303e743\"" Jan 29 12:07:11.224386 containerd[1474]: time="2025-01-29T12:07:11.224216007Z" level=info msg="CreateContainer within sandbox \"80920a7ef82fc95d3e35429a125078e0cde84cd23cf74d8d7b13bf0b1303e743\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:07:11.230121 containerd[1474]: time="2025-01-29T12:07:11.229900663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-2-f17d477515,Uid:59e3b525b6bf64f15cae39b9342b9676,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b41d9015392e385a753c934dab40a5886e45b5d06791997d5cdef50329ffd20\"" Jan 29 12:07:11.233378 containerd[1474]: time="2025-01-29T12:07:11.233280264Z" level=info msg="CreateContainer within sandbox \"5b41d9015392e385a753c934dab40a5886e45b5d06791997d5cdef50329ffd20\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:07:11.243008 containerd[1474]: time="2025-01-29T12:07:11.241873528Z" level=info msg="CreateContainer within sandbox \"05bf1492e382cad0f9f4193a6f7f1f9d23ceb9907935671d358003f37214f10f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d39f69413c1859fd305b5a976208170a976e02d27af6bc0cd96780cde28256ec\"" Jan 29 12:07:11.244439 containerd[1474]: time="2025-01-29T12:07:11.244089583Z" level=info msg="StartContainer for \"d39f69413c1859fd305b5a976208170a976e02d27af6bc0cd96780cde28256ec\"" Jan 29 12:07:11.248446 containerd[1474]: time="2025-01-29T12:07:11.248399974Z" level=info msg="CreateContainer within sandbox \"80920a7ef82fc95d3e35429a125078e0cde84cd23cf74d8d7b13bf0b1303e743\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cbad7f37959dac8026d2b49b15e7b2ca558ce037d3c82de36bdb5afd2dd5b91a\"" Jan 29 12:07:11.249430 containerd[1474]: time="2025-01-29T12:07:11.249265684Z" level=info msg="StartContainer for \"cbad7f37959dac8026d2b49b15e7b2ca558ce037d3c82de36bdb5afd2dd5b91a\"" Jan 29 12:07:11.254198 containerd[1474]: time="2025-01-29T12:07:11.254141349Z" level=info msg="CreateContainer within sandbox \"5b41d9015392e385a753c934dab40a5886e45b5d06791997d5cdef50329ffd20\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f20580dfc18dbc6bea374c61e743f3ced0c61337da55777e658afe184f5ed4a\"" Jan 29 12:07:11.254666 containerd[1474]: time="2025-01-29T12:07:11.254555985Z" level=info msg="StartContainer for \"9f20580dfc18dbc6bea374c61e743f3ced0c61337da55777e658afe184f5ed4a\"" Jan 29 12:07:11.286237 systemd[1]: Started cri-containerd-d39f69413c1859fd305b5a976208170a976e02d27af6bc0cd96780cde28256ec.scope - libcontainer container d39f69413c1859fd305b5a976208170a976e02d27af6bc0cd96780cde28256ec. 
Jan 29 12:07:11.294635 systemd[1]: Started cri-containerd-cbad7f37959dac8026d2b49b15e7b2ca558ce037d3c82de36bdb5afd2dd5b91a.scope - libcontainer container cbad7f37959dac8026d2b49b15e7b2ca558ce037d3c82de36bdb5afd2dd5b91a. Jan 29 12:07:11.298382 systemd[1]: Started cri-containerd-9f20580dfc18dbc6bea374c61e743f3ced0c61337da55777e658afe184f5ed4a.scope - libcontainer container 9f20580dfc18dbc6bea374c61e743f3ced0c61337da55777e658afe184f5ed4a. Jan 29 12:07:11.362428 containerd[1474]: time="2025-01-29T12:07:11.361298182Z" level=info msg="StartContainer for \"cbad7f37959dac8026d2b49b15e7b2ca558ce037d3c82de36bdb5afd2dd5b91a\" returns successfully" Jan 29 12:07:11.364047 kubelet[2419]: W0129 12:07:11.363947 2419 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.53.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:11.364047 kubelet[2419]: E0129 12:07:11.364024 2419 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://159.69.53.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.53.160:6443: connect: connection refused Jan 29 12:07:11.364870 containerd[1474]: time="2025-01-29T12:07:11.364070071Z" level=info msg="StartContainer for \"9f20580dfc18dbc6bea374c61e743f3ced0c61337da55777e658afe184f5ed4a\" returns successfully" Jan 29 12:07:11.370079 containerd[1474]: time="2025-01-29T12:07:11.369397651Z" level=info msg="StartContainer for \"d39f69413c1859fd305b5a976208170a976e02d27af6bc0cd96780cde28256ec\" returns successfully" Jan 29 12:07:11.412698 kubelet[2419]: E0129 12:07:11.412605 2419 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.53.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-2-f17d477515?timeout=10s\": dial tcp 
159.69.53.160:6443: connect: connection refused" interval="1.6s" Jan 29 12:07:11.518315 kubelet[2419]: I0129 12:07:11.517694 2419 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:13.715676 kubelet[2419]: E0129 12:07:13.715632 2419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-2-f17d477515\" not found" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:13.819843 kubelet[2419]: E0129 12:07:13.819738 2419 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-2-f17d477515.181f286d0af9fc92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-2-f17d477515,UID:ci-4081-3-0-2-f17d477515,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-2-f17d477515,},FirstTimestamp:2025-01-29 12:07:09.989002386 +0000 UTC m=+0.902603332,LastTimestamp:2025-01-29 12:07:09.989002386 +0000 UTC m=+0.902603332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-2-f17d477515,}" Jan 29 12:07:13.879301 kubelet[2419]: I0129 12:07:13.879195 2419 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:13.989547 kubelet[2419]: I0129 12:07:13.989255 2419 apiserver.go:52] "Watching apiserver" Jan 29 12:07:14.010381 kubelet[2419]: I0129 12:07:14.010206 2419 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:07:14.066641 kubelet[2419]: E0129 12:07:14.066582 2419 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-2-f17d477515\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.034611 systemd[1]: Reloading requested from client PID 2691 ('systemctl') (unit session-7.scope)... Jan 29 12:07:16.034627 systemd[1]: Reloading... Jan 29 12:07:16.138437 zram_generator::config[2731]: No configuration found. Jan 29 12:07:16.235278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:07:16.319575 systemd[1]: Reloading finished in 284 ms. Jan 29 12:07:16.367596 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:07:16.381130 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:07:16.381525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:07:16.381616 systemd[1]: kubelet.service: Consumed 1.336s CPU time, 113.9M memory peak, 0B memory swap peak. Jan 29 12:07:16.389957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:07:16.506808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:07:16.511906 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:07:16.565032 kubelet[2776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:07:16.565032 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:07:16.565032 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:07:16.566363 kubelet[2776]: I0129 12:07:16.565424 2776 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:07:16.570625 kubelet[2776]: I0129 12:07:16.570104 2776 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:07:16.570770 kubelet[2776]: I0129 12:07:16.570755 2776 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:07:16.571026 kubelet[2776]: I0129 12:07:16.571007 2776 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:07:16.572814 kubelet[2776]: I0129 12:07:16.572785 2776 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:07:16.574335 kubelet[2776]: I0129 12:07:16.574312 2776 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:07:16.582325 kubelet[2776]: I0129 12:07:16.582292 2776 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:07:16.583144 kubelet[2776]: I0129 12:07:16.582961 2776 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:07:16.583222 kubelet[2776]: I0129 12:07:16.582987 2776 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-2-f17d477515","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:07:16.583222 kubelet[2776]: I0129 12:07:16.583168 2776 topology_manager.go:138] "Creating topology manager with none policy" Jan 
29 12:07:16.583222 kubelet[2776]: I0129 12:07:16.583177 2776 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:07:16.583222 kubelet[2776]: I0129 12:07:16.583212 2776 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:07:16.583362 kubelet[2776]: I0129 12:07:16.583305 2776 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:07:16.583362 kubelet[2776]: I0129 12:07:16.583317 2776 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:07:16.583362 kubelet[2776]: I0129 12:07:16.583341 2776 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:07:16.583362 kubelet[2776]: I0129 12:07:16.583356 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:07:16.586426 kubelet[2776]: I0129 12:07:16.585532 2776 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:07:16.586426 kubelet[2776]: I0129 12:07:16.585748 2776 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:07:16.586426 kubelet[2776]: I0129 12:07:16.586163 2776 server.go:1264] "Started kubelet" Jan 29 12:07:16.588478 kubelet[2776]: I0129 12:07:16.588382 2776 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:07:16.589261 kubelet[2776]: I0129 12:07:16.588906 2776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:07:16.590078 kubelet[2776]: I0129 12:07:16.589755 2776 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:07:16.590356 kubelet[2776]: I0129 12:07:16.589266 2776 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:07:16.590934 kubelet[2776]: I0129 12:07:16.590681 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:07:16.597027 kubelet[2776]: I0129 12:07:16.596995 2776 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:07:16.601412 kubelet[2776]: I0129 12:07:16.598912 2776 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:07:16.601412 kubelet[2776]: I0129 12:07:16.599052 2776 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:07:16.603188 kubelet[2776]: I0129 12:07:16.603061 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:07:16.606433 kubelet[2776]: I0129 12:07:16.606224 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:07:16.606433 kubelet[2776]: I0129 12:07:16.606265 2776 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:07:16.606433 kubelet[2776]: I0129 12:07:16.606280 2776 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:07:16.606433 kubelet[2776]: E0129 12:07:16.606319 2776 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:07:16.630531 kubelet[2776]: E0129 12:07:16.629856 2776 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:07:16.630531 kubelet[2776]: I0129 12:07:16.630124 2776 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:07:16.630531 kubelet[2776]: I0129 12:07:16.630136 2776 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:07:16.630531 kubelet[2776]: I0129 12:07:16.630216 2776 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:07:16.694246 kubelet[2776]: I0129 12:07:16.694191 2776 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:07:16.694246 kubelet[2776]: I0129 12:07:16.694210 2776 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:07:16.694246 kubelet[2776]: I0129 12:07:16.694232 2776 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:07:16.694653 kubelet[2776]: I0129 12:07:16.694417 2776 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:07:16.694653 kubelet[2776]: I0129 12:07:16.694431 2776 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:07:16.694653 kubelet[2776]: I0129 12:07:16.694450 2776 policy_none.go:49] "None policy: Start" Jan 29 12:07:16.695675 kubelet[2776]: I0129 12:07:16.695626 2776 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:07:16.695675 kubelet[2776]: I0129 12:07:16.695656 2776 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:07:16.695831 kubelet[2776]: I0129 12:07:16.695813 2776 state_mem.go:75] "Updated machine memory state" Jan 29 12:07:16.704892 kubelet[2776]: I0129 12:07:16.704216 2776 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:07:16.704892 kubelet[2776]: I0129 12:07:16.704425 2776 container_log_manager.go:186] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:07:16.704892 kubelet[2776]: I0129 12:07:16.704534 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:07:16.706004 kubelet[2776]: I0129 12:07:16.705983 2776 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.707756 kubelet[2776]: I0129 12:07:16.707282 2776 topology_manager.go:215] "Topology Admit Handler" podUID="59e3b525b6bf64f15cae39b9342b9676" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.707756 kubelet[2776]: I0129 12:07:16.707401 2776 topology_manager.go:215] "Topology Admit Handler" podUID="2c24137695be8b0f524748bf812c8a73" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.708437 kubelet[2776]: I0129 12:07:16.707897 2776 topology_manager.go:215] "Topology Admit Handler" podUID="82ac880ae6e4838db939725877367ab8" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.727658 kubelet[2776]: E0129 12:07:16.727561 2776 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.730634 kubelet[2776]: I0129 12:07:16.727528 2776 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.730634 kubelet[2776]: I0129 12:07:16.730582 2776 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.899987 kubelet[2776]: I0129 12:07:16.899844 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.900334 kubelet[2776]: I0129 12:07:16.900274 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.900990 kubelet[2776]: I0129 12:07:16.900779 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2c24137695be8b0f524748bf812c8a73-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-2-f17d477515\" (UID: \"2c24137695be8b0f524748bf812c8a73\") " pod="kube-system/kube-scheduler-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.900990 kubelet[2776]: I0129 12:07:16.901047 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/82ac880ae6e4838db939725877367ab8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-2-f17d477515\" (UID: \"82ac880ae6e4838db939725877367ab8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.901883 kubelet[2776]: I0129 12:07:16.901087 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/82ac880ae6e4838db939725877367ab8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-2-f17d477515\" (UID: \"82ac880ae6e4838db939725877367ab8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.901883 kubelet[2776]: I0129 12:07:16.901822 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.902347 kubelet[2776]: I0129 12:07:16.902109 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.902617 kubelet[2776]: I0129 12:07:16.902267 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59e3b525b6bf64f15cae39b9342b9676-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-2-f17d477515\" (UID: \"59e3b525b6bf64f15cae39b9342b9676\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" Jan 29 12:07:16.902617 kubelet[2776]: I0129 12:07:16.902535 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/82ac880ae6e4838db939725877367ab8-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-2-f17d477515\" (UID: \"82ac880ae6e4838db939725877367ab8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-2-f17d477515" Jan 29 12:07:17.032029 sudo[2810]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 12:07:17.033200 sudo[2810]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 12:07:17.507452 sudo[2810]: pam_unix(sudo:session): session closed for user root Jan 29 12:07:17.584949 kubelet[2776]: I0129 12:07:17.584706 2776 apiserver.go:52] "Watching apiserver" Jan 29 
12:07:17.599704 kubelet[2776]: I0129 12:07:17.599651 2776 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:07:17.704573 kubelet[2776]: I0129 12:07:17.704507 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-2-f17d477515" podStartSLOduration=1.704463042 podStartE2EDuration="1.704463042s" podCreationTimestamp="2025-01-29 12:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:07:17.703236454 +0000 UTC m=+1.187301463" watchObservedRunningTime="2025-01-29 12:07:17.704463042 +0000 UTC m=+1.188528051" Jan 29 12:07:17.727508 kubelet[2776]: I0129 12:07:17.727445 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-2-f17d477515" podStartSLOduration=3.72742454 podStartE2EDuration="3.72742454s" podCreationTimestamp="2025-01-29 12:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:07:17.714645063 +0000 UTC m=+1.198710112" watchObservedRunningTime="2025-01-29 12:07:17.72742454 +0000 UTC m=+1.211489549" Jan 29 12:07:17.740691 kubelet[2776]: I0129 12:07:17.740623 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-2-f17d477515" podStartSLOduration=1.740605212 podStartE2EDuration="1.740605212s" podCreationTimestamp="2025-01-29 12:07:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:07:17.728070693 +0000 UTC m=+1.212135702" watchObservedRunningTime="2025-01-29 12:07:17.740605212 +0000 UTC m=+1.224670221" Jan 29 12:07:19.207078 sudo[1882]: pam_unix(sudo:session): session closed for user root Jan 29 12:07:19.365663 
sshd[1879]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:19.369106 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:07:19.370003 systemd[1]: sshd@6-159.69.53.160:22-139.178.89.65:33186.service: Deactivated successfully. Jan 29 12:07:19.371694 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:07:19.371842 systemd[1]: session-7.scope: Consumed 8.588s CPU time, 187.9M memory peak, 0B memory swap peak. Jan 29 12:07:19.373801 systemd-logind[1459]: Removed session 7. Jan 29 12:07:29.971898 kubelet[2776]: I0129 12:07:29.971862 2776 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:07:29.973988 containerd[1474]: time="2025-01-29T12:07:29.973950072Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:07:29.974679 kubelet[2776]: I0129 12:07:29.974450 2776 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:07:30.946591 kubelet[2776]: I0129 12:07:30.946539 2776 topology_manager.go:215] "Topology Admit Handler" podUID="91e5da30-0ab2-45b1-8a66-d8283a7d505e" podNamespace="kube-system" podName="kube-proxy-2qvzw" Jan 29 12:07:30.956849 systemd[1]: Created slice kubepods-besteffort-pod91e5da30_0ab2_45b1_8a66_d8283a7d505e.slice - libcontainer container kubepods-besteffort-pod91e5da30_0ab2_45b1_8a66_d8283a7d505e.slice. Jan 29 12:07:30.978491 kubelet[2776]: I0129 12:07:30.978441 2776 topology_manager.go:215] "Topology Admit Handler" podUID="35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" podNamespace="kube-system" podName="cilium-xqsw8" Jan 29 12:07:30.989644 systemd[1]: Created slice kubepods-burstable-pod35eaa9a9_6c2c_4b43_876b_984a07d9b4b4.slice - libcontainer container kubepods-burstable-pod35eaa9a9_6c2c_4b43_876b_984a07d9b4b4.slice. 
Jan 29 12:07:30.991616 kubelet[2776]: I0129 12:07:30.991579 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91e5da30-0ab2-45b1-8a66-d8283a7d505e-kube-proxy\") pod \"kube-proxy-2qvzw\" (UID: \"91e5da30-0ab2-45b1-8a66-d8283a7d505e\") " pod="kube-system/kube-proxy-2qvzw" Jan 29 12:07:30.991718 kubelet[2776]: I0129 12:07:30.991619 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-run\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991718 kubelet[2776]: I0129 12:07:30.991639 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-host-proc-sys-net\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991718 kubelet[2776]: I0129 12:07:30.991656 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-cgroup\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991718 kubelet[2776]: I0129 12:07:30.991671 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-host-proc-sys-kernel\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991718 kubelet[2776]: I0129 12:07:30.991687 2776 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91e5da30-0ab2-45b1-8a66-d8283a7d505e-xtables-lock\") pod \"kube-proxy-2qvzw\" (UID: \"91e5da30-0ab2-45b1-8a66-d8283a7d505e\") " pod="kube-system/kube-proxy-2qvzw" Jan 29 12:07:30.991718 kubelet[2776]: I0129 12:07:30.991704 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-xtables-lock\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991852 kubelet[2776]: I0129 12:07:30.991718 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-clustermesh-secrets\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991852 kubelet[2776]: I0129 12:07:30.991732 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-config-path\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991852 kubelet[2776]: I0129 12:07:30.991746 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-hubble-tls\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991852 kubelet[2776]: I0129 12:07:30.991760 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-bpf-maps\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991852 kubelet[2776]: I0129 12:07:30.991774 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-etc-cni-netd\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991852 kubelet[2776]: I0129 12:07:30.991787 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91e5da30-0ab2-45b1-8a66-d8283a7d505e-lib-modules\") pod \"kube-proxy-2qvzw\" (UID: \"91e5da30-0ab2-45b1-8a66-d8283a7d505e\") " pod="kube-system/kube-proxy-2qvzw" Jan 29 12:07:30.991981 kubelet[2776]: I0129 12:07:30.991812 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-hostproc\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991981 kubelet[2776]: I0129 12:07:30.991827 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cni-path\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991981 kubelet[2776]: I0129 12:07:30.991842 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm2fs\" (UniqueName: \"kubernetes.io/projected/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-kube-api-access-jm2fs\") pod \"cilium-xqsw8\" (UID: 
\"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:30.991981 kubelet[2776]: I0129 12:07:30.991857 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffvtr\" (UniqueName: \"kubernetes.io/projected/91e5da30-0ab2-45b1-8a66-d8283a7d505e-kube-api-access-ffvtr\") pod \"kube-proxy-2qvzw\" (UID: \"91e5da30-0ab2-45b1-8a66-d8283a7d505e\") " pod="kube-system/kube-proxy-2qvzw" Jan 29 12:07:30.991981 kubelet[2776]: I0129 12:07:30.991874 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-lib-modules\") pod \"cilium-xqsw8\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " pod="kube-system/cilium-xqsw8" Jan 29 12:07:31.152607 kubelet[2776]: I0129 12:07:31.151917 2776 topology_manager.go:215] "Topology Admit Handler" podUID="3615d730-5b8f-42e9-808e-93298480ea8f" podNamespace="kube-system" podName="cilium-operator-599987898-htxsn" Jan 29 12:07:31.159711 systemd[1]: Created slice kubepods-besteffort-pod3615d730_5b8f_42e9_808e_93298480ea8f.slice - libcontainer container kubepods-besteffort-pod3615d730_5b8f_42e9_808e_93298480ea8f.slice. 
Jan 29 12:07:31.193597 kubelet[2776]: I0129 12:07:31.193497 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3615d730-5b8f-42e9-808e-93298480ea8f-cilium-config-path\") pod \"cilium-operator-599987898-htxsn\" (UID: \"3615d730-5b8f-42e9-808e-93298480ea8f\") " pod="kube-system/cilium-operator-599987898-htxsn" Jan 29 12:07:31.193597 kubelet[2776]: I0129 12:07:31.193551 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n8g8\" (UniqueName: \"kubernetes.io/projected/3615d730-5b8f-42e9-808e-93298480ea8f-kube-api-access-6n8g8\") pod \"cilium-operator-599987898-htxsn\" (UID: \"3615d730-5b8f-42e9-808e-93298480ea8f\") " pod="kube-system/cilium-operator-599987898-htxsn" Jan 29 12:07:31.265903 containerd[1474]: time="2025-01-29T12:07:31.265094212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2qvzw,Uid:91e5da30-0ab2-45b1-8a66-d8283a7d505e,Namespace:kube-system,Attempt:0,}" Jan 29 12:07:31.291195 containerd[1474]: time="2025-01-29T12:07:31.290799273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:07:31.291195 containerd[1474]: time="2025-01-29T12:07:31.290886472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:07:31.291195 containerd[1474]: time="2025-01-29T12:07:31.290910072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:31.291195 containerd[1474]: time="2025-01-29T12:07:31.291005672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:31.295588 containerd[1474]: time="2025-01-29T12:07:31.295336841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqsw8,Uid:35eaa9a9-6c2c-4b43-876b-984a07d9b4b4,Namespace:kube-system,Attempt:0,}" Jan 29 12:07:31.316600 systemd[1]: Started cri-containerd-47cb5ff9c8015c367238fd3f7fbea13f4694a713ae2d902ce176349d36818f0e.scope - libcontainer container 47cb5ff9c8015c367238fd3f7fbea13f4694a713ae2d902ce176349d36818f0e. Jan 29 12:07:31.327151 containerd[1474]: time="2025-01-29T12:07:31.327058580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:07:31.327456 containerd[1474]: time="2025-01-29T12:07:31.327252578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:07:31.327456 containerd[1474]: time="2025-01-29T12:07:31.327289018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:31.327909 containerd[1474]: time="2025-01-29T12:07:31.327866734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:31.349044 containerd[1474]: time="2025-01-29T12:07:31.349005587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2qvzw,Uid:91e5da30-0ab2-45b1-8a66-d8283a7d505e,Namespace:kube-system,Attempt:0,} returns sandbox id \"47cb5ff9c8015c367238fd3f7fbea13f4694a713ae2d902ce176349d36818f0e\"" Jan 29 12:07:31.354518 containerd[1474]: time="2025-01-29T12:07:31.354379909Z" level=info msg="CreateContainer within sandbox \"47cb5ff9c8015c367238fd3f7fbea13f4694a713ae2d902ce176349d36818f0e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:07:31.361605 systemd[1]: Started cri-containerd-cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac.scope - libcontainer container cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac. Jan 29 12:07:31.375473 containerd[1474]: time="2025-01-29T12:07:31.375358723Z" level=info msg="CreateContainer within sandbox \"47cb5ff9c8015c367238fd3f7fbea13f4694a713ae2d902ce176349d36818f0e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"03110029c1c768dda9cc1eac9c97a096f4ae0ca8c443e6bb5eef5c7e814c6dfb\"" Jan 29 12:07:31.378866 containerd[1474]: time="2025-01-29T12:07:31.377427708Z" level=info msg="StartContainer for \"03110029c1c768dda9cc1eac9c97a096f4ae0ca8c443e6bb5eef5c7e814c6dfb\"" Jan 29 12:07:31.393940 containerd[1474]: time="2025-01-29T12:07:31.393885873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqsw8,Uid:35eaa9a9-6c2c-4b43-876b-984a07d9b4b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\"" Jan 29 12:07:31.396364 containerd[1474]: time="2025-01-29T12:07:31.396235657Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 12:07:31.415766 systemd[1]: Started 
cri-containerd-03110029c1c768dda9cc1eac9c97a096f4ae0ca8c443e6bb5eef5c7e814c6dfb.scope - libcontainer container 03110029c1c768dda9cc1eac9c97a096f4ae0ca8c443e6bb5eef5c7e814c6dfb. Jan 29 12:07:31.448305 containerd[1474]: time="2025-01-29T12:07:31.448204654Z" level=info msg="StartContainer for \"03110029c1c768dda9cc1eac9c97a096f4ae0ca8c443e6bb5eef5c7e814c6dfb\" returns successfully" Jan 29 12:07:31.464663 containerd[1474]: time="2025-01-29T12:07:31.464140183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-htxsn,Uid:3615d730-5b8f-42e9-808e-93298480ea8f,Namespace:kube-system,Attempt:0,}" Jan 29 12:07:31.488720 containerd[1474]: time="2025-01-29T12:07:31.488378013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:07:31.488720 containerd[1474]: time="2025-01-29T12:07:31.488460413Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:07:31.488720 containerd[1474]: time="2025-01-29T12:07:31.488476493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:31.488720 containerd[1474]: time="2025-01-29T12:07:31.488564652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:07:31.514754 systemd[1]: Started cri-containerd-16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c.scope - libcontainer container 16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c. 
Jan 29 12:07:31.556477 containerd[1474]: time="2025-01-29T12:07:31.555654624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-htxsn,Uid:3615d730-5b8f-42e9-808e-93298480ea8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\"" Jan 29 12:07:36.624055 kubelet[2776]: I0129 12:07:36.623945 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2qvzw" podStartSLOduration=6.623908862 podStartE2EDuration="6.623908862s" podCreationTimestamp="2025-01-29 12:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:07:31.724851322 +0000 UTC m=+15.208916411" watchObservedRunningTime="2025-01-29 12:07:36.623908862 +0000 UTC m=+20.107973871" Jan 29 12:07:39.902763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569985226.mount: Deactivated successfully. 
Jan 29 12:07:41.248476 containerd[1474]: time="2025-01-29T12:07:41.248344372Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:41.249749 containerd[1474]: time="2025-01-29T12:07:41.249707805Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 12:07:41.251457 containerd[1474]: time="2025-01-29T12:07:41.250526720Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:41.252901 containerd[1474]: time="2025-01-29T12:07:41.251914672Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.855640495s" Jan 29 12:07:41.252901 containerd[1474]: time="2025-01-29T12:07:41.251949712Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 12:07:41.256578 containerd[1474]: time="2025-01-29T12:07:41.256523966Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 12:07:41.257880 containerd[1474]: time="2025-01-29T12:07:41.257838278Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:07:41.272831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769789145.mount: Deactivated successfully. Jan 29 12:07:41.275163 containerd[1474]: time="2025-01-29T12:07:41.275121300Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\"" Jan 29 12:07:41.277177 containerd[1474]: time="2025-01-29T12:07:41.275983056Z" level=info msg="StartContainer for \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\"" Jan 29 12:07:41.308606 systemd[1]: Started cri-containerd-eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049.scope - libcontainer container eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049. Jan 29 12:07:41.337238 containerd[1474]: time="2025-01-29T12:07:41.337196749Z" level=info msg="StartContainer for \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\" returns successfully" Jan 29 12:07:41.357146 systemd[1]: cri-containerd-eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049.scope: Deactivated successfully. 
Jan 29 12:07:41.429726 containerd[1474]: time="2025-01-29T12:07:41.429637785Z" level=info msg="shim disconnected" id=eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049 namespace=k8s.io Jan 29 12:07:41.429726 containerd[1474]: time="2025-01-29T12:07:41.429708184Z" level=warning msg="cleaning up after shim disconnected" id=eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049 namespace=k8s.io Jan 29 12:07:41.429726 containerd[1474]: time="2025-01-29T12:07:41.429716864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:07:41.442937 containerd[1474]: time="2025-01-29T12:07:41.442821430Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:07:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:07:41.747304 containerd[1474]: time="2025-01-29T12:07:41.747092346Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:07:41.769996 containerd[1474]: time="2025-01-29T12:07:41.769918696Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\"" Jan 29 12:07:41.772663 containerd[1474]: time="2025-01-29T12:07:41.771765166Z" level=info msg="StartContainer for \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\"" Jan 29 12:07:41.798615 systemd[1]: Started cri-containerd-e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da.scope - libcontainer container e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da. 
Jan 29 12:07:41.827814 containerd[1474]: time="2025-01-29T12:07:41.827771248Z" level=info msg="StartContainer for \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\" returns successfully" Jan 29 12:07:41.837359 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:07:41.837611 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:07:41.837689 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:07:41.845744 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:07:41.845930 systemd[1]: cri-containerd-e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da.scope: Deactivated successfully. Jan 29 12:07:41.876257 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:07:41.885659 containerd[1474]: time="2025-01-29T12:07:41.885499241Z" level=info msg="shim disconnected" id=e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da namespace=k8s.io Jan 29 12:07:41.885659 containerd[1474]: time="2025-01-29T12:07:41.885567881Z" level=warning msg="cleaning up after shim disconnected" id=e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da namespace=k8s.io Jan 29 12:07:41.885659 containerd[1474]: time="2025-01-29T12:07:41.885580681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:07:42.268837 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049-rootfs.mount: Deactivated successfully. 
Jan 29 12:07:42.751441 containerd[1474]: time="2025-01-29T12:07:42.751066978Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:07:42.773606 containerd[1474]: time="2025-01-29T12:07:42.773560733Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\"" Jan 29 12:07:42.774626 containerd[1474]: time="2025-01-29T12:07:42.774578967Z" level=info msg="StartContainer for \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\"" Jan 29 12:07:42.809634 systemd[1]: Started cri-containerd-13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623.scope - libcontainer container 13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623. Jan 29 12:07:42.847757 containerd[1474]: time="2025-01-29T12:07:42.846935565Z" level=info msg="StartContainer for \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\" returns successfully" Jan 29 12:07:42.853838 systemd[1]: cri-containerd-13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623.scope: Deactivated successfully. 
Jan 29 12:07:42.889421 containerd[1474]: time="2025-01-29T12:07:42.889013931Z" level=info msg="shim disconnected" id=13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623 namespace=k8s.io Jan 29 12:07:42.889421 containerd[1474]: time="2025-01-29T12:07:42.889114130Z" level=warning msg="cleaning up after shim disconnected" id=13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623 namespace=k8s.io Jan 29 12:07:42.889421 containerd[1474]: time="2025-01-29T12:07:42.889122690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:07:43.269528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623-rootfs.mount: Deactivated successfully. Jan 29 12:07:43.760172 containerd[1474]: time="2025-01-29T12:07:43.760113370Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:07:43.779336 containerd[1474]: time="2025-01-29T12:07:43.779273506Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\"" Jan 29 12:07:43.780180 containerd[1474]: time="2025-01-29T12:07:43.780120741Z" level=info msg="StartContainer for \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\"" Jan 29 12:07:43.836290 systemd[1]: Started cri-containerd-3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8.scope - libcontainer container 3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8. Jan 29 12:07:43.867154 systemd[1]: cri-containerd-3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8.scope: Deactivated successfully. 
Jan 29 12:07:43.867449 containerd[1474]: time="2025-01-29T12:07:43.867185507Z" level=info msg="StartContainer for \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\" returns successfully" Jan 29 12:07:43.896243 containerd[1474]: time="2025-01-29T12:07:43.896133869Z" level=info msg="shim disconnected" id=3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8 namespace=k8s.io Jan 29 12:07:43.896243 containerd[1474]: time="2025-01-29T12:07:43.896190189Z" level=warning msg="cleaning up after shim disconnected" id=3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8 namespace=k8s.io Jan 29 12:07:43.896243 containerd[1474]: time="2025-01-29T12:07:43.896198829Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:07:44.269704 systemd[1]: run-containerd-runc-k8s.io-3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8-runc.Q03QVb.mount: Deactivated successfully. Jan 29 12:07:44.269864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8-rootfs.mount: Deactivated successfully. 
Jan 29 12:07:44.764472 containerd[1474]: time="2025-01-29T12:07:44.763914097Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:07:44.785738 containerd[1474]: time="2025-01-29T12:07:44.785674421Z" level=info msg="CreateContainer within sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\"" Jan 29 12:07:44.786542 containerd[1474]: time="2025-01-29T12:07:44.786503856Z" level=info msg="StartContainer for \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\"" Jan 29 12:07:44.822714 systemd[1]: Started cri-containerd-98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d.scope - libcontainer container 98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d. 
Jan 29 12:07:44.854920 containerd[1474]: time="2025-01-29T12:07:44.854843331Z" level=info msg="StartContainer for \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\" returns successfully" Jan 29 12:07:45.004103 kubelet[2776]: I0129 12:07:45.004063 2776 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:07:45.033883 kubelet[2776]: I0129 12:07:45.033041 2776 topology_manager.go:215] "Topology Admit Handler" podUID="f153bc98-7acd-402d-94c3-2a719309ddba" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zfx25" Jan 29 12:07:45.037509 kubelet[2776]: I0129 12:07:45.035215 2776 topology_manager.go:215] "Topology Admit Handler" podUID="4d34577e-fbcd-4caa-bdcd-592622f67430" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2rvxk" Jan 29 12:07:45.051583 systemd[1]: Created slice kubepods-burstable-podf153bc98_7acd_402d_94c3_2a719309ddba.slice - libcontainer container kubepods-burstable-podf153bc98_7acd_402d_94c3_2a719309ddba.slice. Jan 29 12:07:45.059639 systemd[1]: Created slice kubepods-burstable-pod4d34577e_fbcd_4caa_bdcd_592622f67430.slice - libcontainer container kubepods-burstable-pod4d34577e_fbcd_4caa_bdcd_592622f67430.slice. 
Jan 29 12:07:45.187666 kubelet[2776]: I0129 12:07:45.187625 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvrl7\" (UniqueName: \"kubernetes.io/projected/4d34577e-fbcd-4caa-bdcd-592622f67430-kube-api-access-nvrl7\") pod \"coredns-7db6d8ff4d-2rvxk\" (UID: \"4d34577e-fbcd-4caa-bdcd-592622f67430\") " pod="kube-system/coredns-7db6d8ff4d-2rvxk" Jan 29 12:07:45.187953 kubelet[2776]: I0129 12:07:45.187909 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z89mz\" (UniqueName: \"kubernetes.io/projected/f153bc98-7acd-402d-94c3-2a719309ddba-kube-api-access-z89mz\") pod \"coredns-7db6d8ff4d-zfx25\" (UID: \"f153bc98-7acd-402d-94c3-2a719309ddba\") " pod="kube-system/coredns-7db6d8ff4d-zfx25" Jan 29 12:07:45.188205 kubelet[2776]: I0129 12:07:45.188188 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d34577e-fbcd-4caa-bdcd-592622f67430-config-volume\") pod \"coredns-7db6d8ff4d-2rvxk\" (UID: \"4d34577e-fbcd-4caa-bdcd-592622f67430\") " pod="kube-system/coredns-7db6d8ff4d-2rvxk" Jan 29 12:07:45.188473 kubelet[2776]: I0129 12:07:45.188392 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f153bc98-7acd-402d-94c3-2a719309ddba-config-volume\") pod \"coredns-7db6d8ff4d-zfx25\" (UID: \"f153bc98-7acd-402d-94c3-2a719309ddba\") " pod="kube-system/coredns-7db6d8ff4d-zfx25" Jan 29 12:07:45.270637 systemd[1]: run-containerd-runc-k8s.io-98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d-runc.Ra36Qg.mount: Deactivated successfully. 
Jan 29 12:07:45.358077 containerd[1474]: time="2025-01-29T12:07:45.357329399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zfx25,Uid:f153bc98-7acd-402d-94c3-2a719309ddba,Namespace:kube-system,Attempt:0,}" Jan 29 12:07:45.364623 containerd[1474]: time="2025-01-29T12:07:45.363672886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2rvxk,Uid:4d34577e-fbcd-4caa-bdcd-592622f67430,Namespace:kube-system,Attempt:0,}" Jan 29 12:07:45.789532 kubelet[2776]: I0129 12:07:45.789205 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xqsw8" podStartSLOduration=5.930735213 podStartE2EDuration="15.789188012s" podCreationTimestamp="2025-01-29 12:07:30 +0000 UTC" firstStartedPulling="2025-01-29 12:07:31.395630061 +0000 UTC m=+14.879695070" lastFinishedPulling="2025-01-29 12:07:41.25408286 +0000 UTC m=+24.738147869" observedRunningTime="2025-01-29 12:07:45.784617596 +0000 UTC m=+29.268682645" watchObservedRunningTime="2025-01-29 12:07:45.789188012 +0000 UTC m=+29.273253021" Jan 29 12:07:48.578450 containerd[1474]: time="2025-01-29T12:07:48.577061791Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:48.579940 containerd[1474]: time="2025-01-29T12:07:48.579881137Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 12:07:48.580811 containerd[1474]: time="2025-01-29T12:07:48.580764252Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:07:48.583161 containerd[1474]: time="2025-01-29T12:07:48.583133040Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 7.326565394s" Jan 29 12:07:48.583265 containerd[1474]: time="2025-01-29T12:07:48.583248440Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 12:07:48.586846 containerd[1474]: time="2025-01-29T12:07:48.586810182Z" level=info msg="CreateContainer within sandbox \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 12:07:48.604235 containerd[1474]: time="2025-01-29T12:07:48.604193256Z" level=info msg="CreateContainer within sandbox \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\"" Jan 29 12:07:48.605626 containerd[1474]: time="2025-01-29T12:07:48.605599729Z" level=info msg="StartContainer for \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\"" Jan 29 12:07:48.634728 systemd[1]: run-containerd-runc-k8s.io-b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd-runc.ZPvvy5.mount: Deactivated successfully. Jan 29 12:07:48.645596 systemd[1]: Started cri-containerd-b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd.scope - libcontainer container b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd. 
Jan 29 12:07:48.673352 containerd[1474]: time="2025-01-29T12:07:48.673225993Z" level=info msg="StartContainer for \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\" returns successfully" Jan 29 12:07:48.788189 kubelet[2776]: I0129 12:07:48.787572 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-htxsn" podStartSLOduration=0.761407797 podStartE2EDuration="17.787543065s" podCreationTimestamp="2025-01-29 12:07:31 +0000 UTC" firstStartedPulling="2025-01-29 12:07:31.557752849 +0000 UTC m=+15.041817858" lastFinishedPulling="2025-01-29 12:07:48.583888117 +0000 UTC m=+32.067953126" observedRunningTime="2025-01-29 12:07:48.786883108 +0000 UTC m=+32.270948117" watchObservedRunningTime="2025-01-29 12:07:48.787543065 +0000 UTC m=+32.271608034" Jan 29 12:07:52.845210 systemd-networkd[1376]: cilium_host: Link UP Jan 29 12:07:52.845338 systemd-networkd[1376]: cilium_net: Link UP Jan 29 12:07:52.845487 systemd-networkd[1376]: cilium_net: Gained carrier Jan 29 12:07:52.845610 systemd-networkd[1376]: cilium_host: Gained carrier Jan 29 12:07:52.936867 systemd-networkd[1376]: cilium_host: Gained IPv6LL Jan 29 12:07:52.957371 systemd-networkd[1376]: cilium_vxlan: Link UP Jan 29 12:07:52.957382 systemd-networkd[1376]: cilium_vxlan: Gained carrier Jan 29 12:07:53.080780 systemd-networkd[1376]: cilium_net: Gained IPv6LL Jan 29 12:07:53.237105 kernel: NET: Registered PF_ALG protocol family Jan 29 12:07:53.951696 systemd-networkd[1376]: lxc_health: Link UP Jan 29 12:07:53.963578 systemd-networkd[1376]: lxc_health: Gained carrier Jan 29 12:07:54.443168 systemd-networkd[1376]: lxcdbd84ecca2e8: Link UP Jan 29 12:07:54.449808 systemd-networkd[1376]: lxc49477c94ca46: Link UP Jan 29 12:07:54.455752 kernel: eth0: renamed from tmp93e65 Jan 29 12:07:54.461947 kernel: eth0: renamed from tmpd5220 Jan 29 12:07:54.468703 systemd-networkd[1376]: lxc49477c94ca46: Gained carrier Jan 29 12:07:54.474234 
systemd-networkd[1376]: lxcdbd84ecca2e8: Gained carrier
Jan 29 12:07:54.504621 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL
Jan 29 12:07:55.144640 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Jan 29 12:07:55.720977 systemd-networkd[1376]: lxc49477c94ca46: Gained IPv6LL
Jan 29 12:07:55.976850 systemd-networkd[1376]: lxcdbd84ecca2e8: Gained IPv6LL
Jan 29 12:07:58.266315 containerd[1474]: time="2025-01-29T12:07:58.266000958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:07:58.266315 containerd[1474]: time="2025-01-29T12:07:58.266082078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:07:58.266315 containerd[1474]: time="2025-01-29T12:07:58.266191478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:07:58.266698 containerd[1474]: time="2025-01-29T12:07:58.266527156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:07:58.291426 containerd[1474]: time="2025-01-29T12:07:58.290625775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:07:58.291426 containerd[1474]: time="2025-01-29T12:07:58.290695455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:07:58.291426 containerd[1474]: time="2025-01-29T12:07:58.290710895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:07:58.291426 containerd[1474]: time="2025-01-29T12:07:58.290787494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:07:58.306893 systemd[1]: run-containerd-runc-k8s.io-93e6556d7c9a4f9ccd89de133752fa17f5bd7d461e7e892e238ba3e877113101-runc.1Pjlyy.mount: Deactivated successfully.
Jan 29 12:07:58.321669 systemd[1]: Started cri-containerd-93e6556d7c9a4f9ccd89de133752fa17f5bd7d461e7e892e238ba3e877113101.scope - libcontainer container 93e6556d7c9a4f9ccd89de133752fa17f5bd7d461e7e892e238ba3e877113101.
Jan 29 12:07:58.325554 systemd[1]: Started cri-containerd-d5220a29d69bbba58eb6de678635b9cedab0d65cb7415ff119c88e74db707641.scope - libcontainer container d5220a29d69bbba58eb6de678635b9cedab0d65cb7415ff119c88e74db707641.
Jan 29 12:07:58.387173 containerd[1474]: time="2025-01-29T12:07:58.387118090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2rvxk,Uid:4d34577e-fbcd-4caa-bdcd-592622f67430,Namespace:kube-system,Attempt:0,} returns sandbox id \"93e6556d7c9a4f9ccd89de133752fa17f5bd7d461e7e892e238ba3e877113101\""
Jan 29 12:07:58.391124 containerd[1474]: time="2025-01-29T12:07:58.390941274Z" level=info msg="CreateContainer within sandbox \"93e6556d7c9a4f9ccd89de133752fa17f5bd7d461e7e892e238ba3e877113101\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:07:58.402456 containerd[1474]: time="2025-01-29T12:07:58.402321186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zfx25,Uid:f153bc98-7acd-402d-94c3-2a719309ddba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5220a29d69bbba58eb6de678635b9cedab0d65cb7415ff119c88e74db707641\""
Jan 29 12:07:58.410152 containerd[1474]: time="2025-01-29T12:07:58.409890514Z" level=info msg="CreateContainer within sandbox \"d5220a29d69bbba58eb6de678635b9cedab0d65cb7415ff119c88e74db707641\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:07:58.416179 containerd[1474]: time="2025-01-29T12:07:58.415481851Z" level=info msg="CreateContainer within sandbox \"93e6556d7c9a4f9ccd89de133752fa17f5bd7d461e7e892e238ba3e877113101\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4061c9c1ccf1f47ba05a498aecd225053567f7c3dfbb90ec64f3473fa6dcd536\""
Jan 29 12:07:58.427092 containerd[1474]: time="2025-01-29T12:07:58.425938967Z" level=info msg="CreateContainer within sandbox \"d5220a29d69bbba58eb6de678635b9cedab0d65cb7415ff119c88e74db707641\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75e921a2ab795c2f7ec3e30dd6f3bf5875db97a470ad1d5e448cf54c44dc5ecd\""
Jan 29 12:07:58.427092 containerd[1474]: time="2025-01-29T12:07:58.426162326Z" level=info msg="StartContainer for \"4061c9c1ccf1f47ba05a498aecd225053567f7c3dfbb90ec64f3473fa6dcd536\""
Jan 29 12:07:58.432781 containerd[1474]: time="2025-01-29T12:07:58.432745258Z" level=info msg="StartContainer for \"75e921a2ab795c2f7ec3e30dd6f3bf5875db97a470ad1d5e448cf54c44dc5ecd\""
Jan 29 12:07:58.463612 systemd[1]: Started cri-containerd-4061c9c1ccf1f47ba05a498aecd225053567f7c3dfbb90ec64f3473fa6dcd536.scope - libcontainer container 4061c9c1ccf1f47ba05a498aecd225053567f7c3dfbb90ec64f3473fa6dcd536.
Jan 29 12:07:58.465478 systemd[1]: Started cri-containerd-75e921a2ab795c2f7ec3e30dd6f3bf5875db97a470ad1d5e448cf54c44dc5ecd.scope - libcontainer container 75e921a2ab795c2f7ec3e30dd6f3bf5875db97a470ad1d5e448cf54c44dc5ecd.
Jan 29 12:07:58.505796 containerd[1474]: time="2025-01-29T12:07:58.505007634Z" level=info msg="StartContainer for \"4061c9c1ccf1f47ba05a498aecd225053567f7c3dfbb90ec64f3473fa6dcd536\" returns successfully"
Jan 29 12:07:58.509119 containerd[1474]: time="2025-01-29T12:07:58.509077417Z" level=info msg="StartContainer for \"75e921a2ab795c2f7ec3e30dd6f3bf5875db97a470ad1d5e448cf54c44dc5ecd\" returns successfully"
Jan 29 12:07:58.831082 kubelet[2776]: I0129 12:07:58.830930 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2rvxk" podStartSLOduration=27.830827186 podStartE2EDuration="27.830827186s" podCreationTimestamp="2025-01-29 12:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:07:58.826785243 +0000 UTC m=+42.310850252" watchObservedRunningTime="2025-01-29 12:07:58.830827186 +0000 UTC m=+42.314892235"
Jan 29 12:07:58.839261 kubelet[2776]: I0129 12:07:58.839201 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zfx25" podStartSLOduration=27.839181871 podStartE2EDuration="27.839181871s" podCreationTimestamp="2025-01-29 12:07:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:07:58.838998711 +0000 UTC m=+42.323063720" watchObservedRunningTime="2025-01-29 12:07:58.839181871 +0000 UTC m=+42.323246840"
Jan 29 12:09:18.955710 systemd[1]: Started sshd@7-159.69.53.160:22-194.0.234.38:63188.service - OpenSSH per-connection server daemon (194.0.234.38:63188).
Jan 29 12:09:19.364903 sshd[4165]: Invalid user vpn from 194.0.234.38 port 63188
Jan 29 12:09:19.403648 sshd[4165]: Connection closed by invalid user vpn 194.0.234.38 port 63188 [preauth]
Jan 29 12:09:19.407809 systemd[1]: sshd@7-159.69.53.160:22-194.0.234.38:63188.service: Deactivated successfully.
Jan 29 12:11:12.414862 update_engine[1460]: I20250129 12:11:12.414762 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 29 12:11:12.414862 update_engine[1460]: I20250129 12:11:12.414834 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.415112 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.415724 1460 omaha_request_params.cc:62] Current group set to lts
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.415851 1460 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.415865 1460 update_attempter.cc:643] Scheduling an action processor start.
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.415946 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.415993 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.416069 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.416081 1460 omaha_request_action.cc:272] Request:
Jan 29 12:11:12.417990 update_engine[1460]:
Jan 29 12:11:12.417990 update_engine[1460]:
Jan 29 12:11:12.417990 update_engine[1460]:
Jan 29 12:11:12.417990 update_engine[1460]:
Jan 29 12:11:12.417990 update_engine[1460]:
Jan 29 12:11:12.417990 update_engine[1460]:
Jan 29 12:11:12.417990 update_engine[1460]:
Jan 29 12:11:12.417990 update_engine[1460]:
Jan 29 12:11:12.417990 update_engine[1460]: I20250129 12:11:12.416090 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:11:12.418431 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 29 12:11:12.418653 update_engine[1460]: I20250129 12:11:12.418562 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:11:12.419012 update_engine[1460]: I20250129 12:11:12.418959 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:11:12.420010 update_engine[1460]: E20250129 12:11:12.419977 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:11:12.420074 update_engine[1460]: I20250129 12:11:12.420050 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 29 12:11:22.326980 update_engine[1460]: I20250129 12:11:22.326845 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:11:22.327871 update_engine[1460]: I20250129 12:11:22.327111 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:11:22.327871 update_engine[1460]: I20250129 12:11:22.327337 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:11:22.328153 update_engine[1460]: E20250129 12:11:22.328070 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:11:22.328153 update_engine[1460]: I20250129 12:11:22.328138 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 29 12:11:32.330164 update_engine[1460]: I20250129 12:11:32.330060 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:11:32.330581 update_engine[1460]: I20250129 12:11:32.330384 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:11:32.330775 update_engine[1460]: I20250129 12:11:32.330726 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:11:32.331637 update_engine[1460]: E20250129 12:11:32.331583 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:11:32.331737 update_engine[1460]: I20250129 12:11:32.331668 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 29 12:11:42.325290 update_engine[1460]: I20250129 12:11:42.325176 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:11:42.325861 update_engine[1460]: I20250129 12:11:42.325616 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:11:42.325861 update_engine[1460]: I20250129 12:11:42.325847 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:11:42.327116 update_engine[1460]: E20250129 12:11:42.327077 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:11:42.327232 update_engine[1460]: I20250129 12:11:42.327142 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 12:11:42.327232 update_engine[1460]: I20250129 12:11:42.327169 1460 omaha_request_action.cc:617] Omaha request response:
Jan 29 12:11:42.327295 update_engine[1460]: E20250129 12:11:42.327251 1460 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 29 12:11:42.327295 update_engine[1460]: I20250129 12:11:42.327270 1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 29 12:11:42.327295 update_engine[1460]: I20250129 12:11:42.327275 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 12:11:42.327295 update_engine[1460]: I20250129 12:11:42.327281 1460 update_attempter.cc:306] Processing Done.
Jan 29 12:11:42.327444 update_engine[1460]: E20250129 12:11:42.327296 1460 update_attempter.cc:619] Update failed.
Jan 29 12:11:42.327444 update_engine[1460]: I20250129 12:11:42.327302 1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 29 12:11:42.327444 update_engine[1460]: I20250129 12:11:42.327307 1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 29 12:11:42.327444 update_engine[1460]: I20250129 12:11:42.327313 1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 29 12:11:42.327444 update_engine[1460]: I20250129 12:11:42.327383 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 12:11:42.327444 update_engine[1460]: I20250129 12:11:42.327422 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 12:11:42.327444 update_engine[1460]: I20250129 12:11:42.327430 1460 omaha_request_action.cc:272] Request:
Jan 29 12:11:42.327444 update_engine[1460]:
Jan 29 12:11:42.327444 update_engine[1460]:
Jan 29 12:11:42.327444 update_engine[1460]:
Jan 29 12:11:42.327444 update_engine[1460]:
Jan 29 12:11:42.327444 update_engine[1460]:
Jan 29 12:11:42.327444 update_engine[1460]:
Jan 29 12:11:42.327444 update_engine[1460]: I20250129 12:11:42.327436 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 12:11:42.327780 update_engine[1460]: I20250129 12:11:42.327579 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 12:11:42.327780 update_engine[1460]: I20250129 12:11:42.327729 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 12:11:42.328269 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 29 12:11:42.328667 update_engine[1460]: E20250129 12:11:42.328541 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 12:11:42.328667 update_engine[1460]: I20250129 12:11:42.328586 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 12:11:42.328667 update_engine[1460]: I20250129 12:11:42.328594 1460 omaha_request_action.cc:617] Omaha request response:
Jan 29 12:11:42.328667 update_engine[1460]: I20250129 12:11:42.328601 1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 12:11:42.328667 update_engine[1460]: I20250129 12:11:42.328608 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 12:11:42.328667 update_engine[1460]: I20250129 12:11:42.328612 1460 update_attempter.cc:306] Processing Done.
Jan 29 12:11:42.328667 update_engine[1460]: I20250129 12:11:42.328619 1460 update_attempter.cc:310] Error event sent.
Jan 29 12:11:42.328667 update_engine[1460]: I20250129 12:11:42.328629 1460 update_check_scheduler.cc:74] Next update check in 41m30s
Jan 29 12:11:42.329195 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 29 12:12:05.380768 systemd[1]: Started sshd@8-159.69.53.160:22-139.178.89.65:57330.service - OpenSSH per-connection server daemon (139.178.89.65:57330).
Jan 29 12:12:06.360286 sshd[4190]: Accepted publickey for core from 139.178.89.65 port 57330 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:06.362476 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:06.369314 systemd-logind[1459]: New session 8 of user core.
Jan 29 12:12:06.374097 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 29 12:12:07.127637 sshd[4190]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:07.132748 systemd[1]: sshd@8-159.69.53.160:22-139.178.89.65:57330.service: Deactivated successfully.
Jan 29 12:12:07.134888 systemd[1]: session-8.scope: Deactivated successfully.
Jan 29 12:12:07.135966 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit.
Jan 29 12:12:07.137106 systemd-logind[1459]: Removed session 8.
Jan 29 12:12:12.303717 systemd[1]: Started sshd@9-159.69.53.160:22-139.178.89.65:54798.service - OpenSSH per-connection server daemon (139.178.89.65:54798).
Jan 29 12:12:13.275252 sshd[4204]: Accepted publickey for core from 139.178.89.65 port 54798 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:13.277901 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:13.284239 systemd-logind[1459]: New session 9 of user core.
Jan 29 12:12:13.290654 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 12:12:14.021321 sshd[4204]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:14.026511 systemd[1]: sshd@9-159.69.53.160:22-139.178.89.65:54798.service: Deactivated successfully.
Jan 29 12:12:14.029782 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 12:12:14.030929 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit.
Jan 29 12:12:14.032494 systemd-logind[1459]: Removed session 9.
Jan 29 12:12:19.197852 systemd[1]: Started sshd@10-159.69.53.160:22-139.178.89.65:54810.service - OpenSSH per-connection server daemon (139.178.89.65:54810).
Jan 29 12:12:20.171832 sshd[4219]: Accepted publickey for core from 139.178.89.65 port 54810 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:20.174341 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:20.180093 systemd-logind[1459]: New session 10 of user core.
Jan 29 12:12:20.188632 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 12:12:20.926826 sshd[4219]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:20.931497 systemd[1]: sshd@10-159.69.53.160:22-139.178.89.65:54810.service: Deactivated successfully.
Jan 29 12:12:20.934642 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 12:12:20.937555 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit.
Jan 29 12:12:20.938595 systemd-logind[1459]: Removed session 10.
Jan 29 12:12:21.102833 systemd[1]: Started sshd@11-159.69.53.160:22-139.178.89.65:54826.service - OpenSSH per-connection server daemon (139.178.89.65:54826).
Jan 29 12:12:22.080454 sshd[4233]: Accepted publickey for core from 139.178.89.65 port 54826 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:22.082855 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:22.088010 systemd-logind[1459]: New session 11 of user core.
Jan 29 12:12:22.097733 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 12:12:22.884872 sshd[4233]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:22.890444 systemd[1]: sshd@11-159.69.53.160:22-139.178.89.65:54826.service: Deactivated successfully.
Jan 29 12:12:22.893077 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 12:12:22.894907 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:12:22.895984 systemd-logind[1459]: Removed session 11.
Jan 29 12:12:23.067794 systemd[1]: Started sshd@12-159.69.53.160:22-139.178.89.65:54116.service - OpenSSH per-connection server daemon (139.178.89.65:54116).
Jan 29 12:12:24.055916 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 54116 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:24.058014 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:24.064027 systemd-logind[1459]: New session 12 of user core.
Jan 29 12:12:24.070660 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 12:12:24.818781 sshd[4245]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:24.824506 systemd[1]: sshd@12-159.69.53.160:22-139.178.89.65:54116.service: Deactivated successfully.
Jan 29 12:12:24.827162 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 12:12:24.829349 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit.
Jan 29 12:12:24.830985 systemd-logind[1459]: Removed session 12.
Jan 29 12:12:29.997705 systemd[1]: Started sshd@13-159.69.53.160:22-139.178.89.65:54126.service - OpenSSH per-connection server daemon (139.178.89.65:54126).
Jan 29 12:12:30.979978 sshd[4258]: Accepted publickey for core from 139.178.89.65 port 54126 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:30.983624 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:30.988503 systemd-logind[1459]: New session 13 of user core.
Jan 29 12:12:30.993560 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 12:12:31.730079 sshd[4258]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:31.736353 systemd[1]: sshd@13-159.69.53.160:22-139.178.89.65:54126.service: Deactivated successfully.
Jan 29 12:12:31.743182 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 12:12:31.745161 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit.
Jan 29 12:12:31.747151 systemd-logind[1459]: Removed session 13.
Jan 29 12:12:31.899569 systemd[1]: Started sshd@14-159.69.53.160:22-139.178.89.65:47814.service - OpenSSH per-connection server daemon (139.178.89.65:47814).
Jan 29 12:12:32.887140 sshd[4273]: Accepted publickey for core from 139.178.89.65 port 47814 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:32.889516 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:32.896359 systemd-logind[1459]: New session 14 of user core.
Jan 29 12:12:32.900692 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 12:12:33.689148 sshd[4273]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:33.695250 systemd[1]: sshd@14-159.69.53.160:22-139.178.89.65:47814.service: Deactivated successfully.
Jan 29 12:12:33.698651 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 12:12:33.699679 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit.
Jan 29 12:12:33.700932 systemd-logind[1459]: Removed session 14.
Jan 29 12:12:33.863786 systemd[1]: Started sshd@15-159.69.53.160:22-139.178.89.65:47826.service - OpenSSH per-connection server daemon (139.178.89.65:47826).
Jan 29 12:12:34.828643 sshd[4284]: Accepted publickey for core from 139.178.89.65 port 47826 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:34.831240 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:34.838006 systemd-logind[1459]: New session 15 of user core.
Jan 29 12:12:34.843683 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 12:12:37.199821 sshd[4284]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:37.205023 systemd[1]: sshd@15-159.69.53.160:22-139.178.89.65:47826.service: Deactivated successfully.
Jan 29 12:12:37.207843 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 12:12:37.208645 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit.
Jan 29 12:12:37.210938 systemd-logind[1459]: Removed session 15.
Jan 29 12:12:37.380054 systemd[1]: Started sshd@16-159.69.53.160:22-139.178.89.65:47830.service - OpenSSH per-connection server daemon (139.178.89.65:47830).
Jan 29 12:12:38.372450 sshd[4302]: Accepted publickey for core from 139.178.89.65 port 47830 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:38.374294 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:38.379774 systemd-logind[1459]: New session 16 of user core.
Jan 29 12:12:38.385858 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 12:12:39.244185 sshd[4302]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:39.248382 systemd[1]: sshd@16-159.69.53.160:22-139.178.89.65:47830.service: Deactivated successfully.
Jan 29 12:12:39.251716 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 12:12:39.254167 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit.
Jan 29 12:12:39.255229 systemd-logind[1459]: Removed session 16.
Jan 29 12:12:39.421949 systemd[1]: Started sshd@17-159.69.53.160:22-139.178.89.65:47840.service - OpenSSH per-connection server daemon (139.178.89.65:47840).
Jan 29 12:12:40.398138 sshd[4313]: Accepted publickey for core from 139.178.89.65 port 47840 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:40.400660 sshd[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:40.406711 systemd-logind[1459]: New session 17 of user core.
Jan 29 12:12:40.413004 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 12:12:41.146631 sshd[4313]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:41.151734 systemd[1]: sshd@17-159.69.53.160:22-139.178.89.65:47840.service: Deactivated successfully.
Jan 29 12:12:41.154258 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 12:12:41.157169 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit.
Jan 29 12:12:41.158248 systemd-logind[1459]: Removed session 17.
Jan 29 12:12:46.324884 systemd[1]: Started sshd@18-159.69.53.160:22-139.178.89.65:32984.service - OpenSSH per-connection server daemon (139.178.89.65:32984).
Jan 29 12:12:47.312191 sshd[4329]: Accepted publickey for core from 139.178.89.65 port 32984 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:47.314578 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:47.320779 systemd-logind[1459]: New session 18 of user core.
Jan 29 12:12:47.326708 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 12:12:48.067043 sshd[4329]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:48.071649 systemd[1]: sshd@18-159.69.53.160:22-139.178.89.65:32984.service: Deactivated successfully.
Jan 29 12:12:48.074395 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 12:12:48.076141 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit.
Jan 29 12:12:48.077683 systemd-logind[1459]: Removed session 18.
Jan 29 12:12:53.243881 systemd[1]: Started sshd@19-159.69.53.160:22-139.178.89.65:49906.service - OpenSSH per-connection server daemon (139.178.89.65:49906).
Jan 29 12:12:54.236683 sshd[4343]: Accepted publickey for core from 139.178.89.65 port 49906 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:54.239191 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:54.245726 systemd-logind[1459]: New session 19 of user core.
Jan 29 12:12:54.251562 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 12:12:54.985204 sshd[4343]: pam_unix(sshd:session): session closed for user core
Jan 29 12:12:54.989659 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit.
Jan 29 12:12:54.990448 systemd[1]: sshd@19-159.69.53.160:22-139.178.89.65:49906.service: Deactivated successfully.
Jan 29 12:12:54.994034 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 12:12:54.995134 systemd-logind[1459]: Removed session 19.
Jan 29 12:12:55.158801 systemd[1]: Started sshd@20-159.69.53.160:22-139.178.89.65:49918.service - OpenSSH per-connection server daemon (139.178.89.65:49918).
Jan 29 12:12:56.136519 sshd[4357]: Accepted publickey for core from 139.178.89.65 port 49918 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:12:56.138091 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:12:56.142679 systemd-logind[1459]: New session 20 of user core.
Jan 29 12:12:56.149582 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 12:12:58.222912 containerd[1474]: time="2025-01-29T12:12:58.222200146Z" level=info msg="StopContainer for \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\" with timeout 30 (s)"
Jan 29 12:12:58.224991 containerd[1474]: time="2025-01-29T12:12:58.224886386Z" level=info msg="Stop container \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\" with signal terminated"
Jan 29 12:12:58.238480 containerd[1474]: time="2025-01-29T12:12:58.238433506Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 12:12:58.242748 systemd[1]: cri-containerd-b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd.scope: Deactivated successfully.
Jan 29 12:12:58.250798 containerd[1474]: time="2025-01-29T12:12:58.250070427Z" level=info msg="StopContainer for \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\" with timeout 2 (s)"
Jan 29 12:12:58.250798 containerd[1474]: time="2025-01-29T12:12:58.250383467Z" level=info msg="Stop container \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\" with signal terminated"
Jan 29 12:12:58.257022 systemd-networkd[1376]: lxc_health: Link DOWN
Jan 29 12:12:58.257029 systemd-networkd[1376]: lxc_health: Lost carrier
Jan 29 12:12:58.285687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd-rootfs.mount: Deactivated successfully.
Jan 29 12:12:58.288543 systemd[1]: cri-containerd-98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d.scope: Deactivated successfully.
Jan 29 12:12:58.290662 systemd[1]: cri-containerd-98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d.scope: Consumed 7.581s CPU time.
Jan 29 12:12:58.299921 containerd[1474]: time="2025-01-29T12:12:58.299850587Z" level=info msg="shim disconnected" id=b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd namespace=k8s.io
Jan 29 12:12:58.300489 containerd[1474]: time="2025-01-29T12:12:58.299904987Z" level=warning msg="cleaning up after shim disconnected" id=b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd namespace=k8s.io
Jan 29 12:12:58.300489 containerd[1474]: time="2025-01-29T12:12:58.300316708Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:12:58.316798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d-rootfs.mount: Deactivated successfully.
Jan 29 12:12:58.319820 containerd[1474]: time="2025-01-29T12:12:58.319557188Z" level=info msg="shim disconnected" id=98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d namespace=k8s.io
Jan 29 12:12:58.319820 containerd[1474]: time="2025-01-29T12:12:58.319656668Z" level=warning msg="cleaning up after shim disconnected" id=98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d namespace=k8s.io
Jan 29 12:12:58.319820 containerd[1474]: time="2025-01-29T12:12:58.319665788Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:12:58.322911 containerd[1474]: time="2025-01-29T12:12:58.322772628Z" level=info msg="StopContainer for \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\" returns successfully"
Jan 29 12:12:58.324377 containerd[1474]: time="2025-01-29T12:12:58.324340148Z" level=info msg="StopPodSandbox for \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\""
Jan 29 12:12:58.324510 containerd[1474]: time="2025-01-29T12:12:58.324390388Z" level=info msg="Container to stop \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:12:58.327884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c-shm.mount: Deactivated successfully.
Jan 29 12:12:58.337816 systemd[1]: cri-containerd-16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c.scope: Deactivated successfully.
Jan 29 12:12:58.346887 containerd[1474]: time="2025-01-29T12:12:58.346778068Z" level=info msg="StopContainer for \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\" returns successfully"
Jan 29 12:12:58.348800 containerd[1474]: time="2025-01-29T12:12:58.348491148Z" level=info msg="StopPodSandbox for \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\""
Jan 29 12:12:58.348800 containerd[1474]: time="2025-01-29T12:12:58.348564028Z" level=info msg="Container to stop \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:12:58.348800 containerd[1474]: time="2025-01-29T12:12:58.348592668Z" level=info msg="Container to stop \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:12:58.350054 containerd[1474]: time="2025-01-29T12:12:58.348618828Z" level=info msg="Container to stop \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:12:58.350054 containerd[1474]: time="2025-01-29T12:12:58.349875228Z" level=info msg="Container to stop \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:12:58.351071 containerd[1474]: time="2025-01-29T12:12:58.350457708Z" level=info msg="Container to stop \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 12:12:58.353446 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac-shm.mount: Deactivated successfully.
Jan 29 12:12:58.365968 systemd[1]: cri-containerd-cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac.scope: Deactivated successfully.
Jan 29 12:12:58.388764 containerd[1474]: time="2025-01-29T12:12:58.388659629Z" level=info msg="shim disconnected" id=cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac namespace=k8s.io
Jan 29 12:12:58.388764 containerd[1474]: time="2025-01-29T12:12:58.388735149Z" level=warning msg="cleaning up after shim disconnected" id=cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac namespace=k8s.io
Jan 29 12:12:58.388764 containerd[1474]: time="2025-01-29T12:12:58.388745029Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:12:58.389928 containerd[1474]: time="2025-01-29T12:12:58.389485989Z" level=info msg="shim disconnected" id=16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c namespace=k8s.io
Jan 29 12:12:58.389928 containerd[1474]: time="2025-01-29T12:12:58.389523989Z" level=warning msg="cleaning up after shim disconnected" id=16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c namespace=k8s.io
Jan 29 12:12:58.389928 containerd[1474]: time="2025-01-29T12:12:58.389532869Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:12:58.407214 containerd[1474]: time="2025-01-29T12:12:58.407165349Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:12:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:12:58.408267 containerd[1474]: time="2025-01-29T12:12:58.408237429Z" level=info msg="TearDown network for sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" successfully"
Jan 29 12:12:58.408381 containerd[1474]: time="2025-01-29T12:12:58.408365589Z" level=info msg="StopPodSandbox for \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" returns successfully"
Jan 29 12:12:58.411634 containerd[1474]: time="2025-01-29T12:12:58.411484189Z" level=info msg="TearDown network for sandbox \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\" successfully"
Jan 29 12:12:58.411634 containerd[1474]: time="2025-01-29T12:12:58.411510069Z" level=info msg="StopPodSandbox for \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\" returns successfully"
Jan 29 12:12:58.527971 kubelet[2776]: I0129 12:12:58.527591 2776 scope.go:117] "RemoveContainer" containerID="98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d"
Jan 29 12:12:58.531132 containerd[1474]: time="2025-01-29T12:12:58.531080231Z" level=info msg="RemoveContainer for \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\""
Jan 29 12:12:58.533441 kubelet[2776]: I0129 12:12:58.532619 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-cgroup\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") "
Jan 29 12:12:58.533441 kubelet[2776]: I0129 12:12:58.532654 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-bpf-maps\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") "
Jan 29 12:12:58.533441 kubelet[2776]: I0129 12:12:58.532671 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cni-path\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") "
Jan 29 12:12:58.533441 kubelet[2776]: I0129 12:12:58.532693 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3615d730-5b8f-42e9-808e-93298480ea8f-cilium-config-path\") pod \"3615d730-5b8f-42e9-808e-93298480ea8f\" (UID: \"3615d730-5b8f-42e9-808e-93298480ea8f\") "
Jan 29 12:12:58.533441 kubelet[2776]: I0129 12:12:58.532710 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-lib-modules\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") "
Jan 29 12:12:58.533441 kubelet[2776]: I0129 12:12:58.532741 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n8g8\" (UniqueName: \"kubernetes.io/projected/3615d730-5b8f-42e9-808e-93298480ea8f-kube-api-access-6n8g8\") pod \"3615d730-5b8f-42e9-808e-93298480ea8f\" (UID: \"3615d730-5b8f-42e9-808e-93298480ea8f\") "
Jan 29 12:12:58.533661 kubelet[2776]: I0129 12:12:58.532760 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-xtables-lock\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") "
Jan 29 12:12:58.533661 kubelet[2776]: I0129 12:12:58.532778 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-clustermesh-secrets\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") "
Jan 29 12:12:58.533661 kubelet[2776]: I0129 12:12:58.532793 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-run\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") "
Jan 29 12:12:58.533661 kubelet[2776]: I0129 12:12:58.532811 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName:
\"kubernetes.io/projected/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-hubble-tls\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " Jan 29 12:12:58.533661 kubelet[2776]: I0129 12:12:58.532826 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-host-proc-sys-kernel\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " Jan 29 12:12:58.533661 kubelet[2776]: I0129 12:12:58.532841 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-etc-cni-netd\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " Jan 29 12:12:58.533815 kubelet[2776]: I0129 12:12:58.532856 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-hostproc\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " Jan 29 12:12:58.533815 kubelet[2776]: I0129 12:12:58.532873 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jm2fs\" (UniqueName: \"kubernetes.io/projected/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-kube-api-access-jm2fs\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " Jan 29 12:12:58.533815 kubelet[2776]: I0129 12:12:58.532890 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-host-proc-sys-net\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " Jan 29 12:12:58.533815 kubelet[2776]: I0129 
12:12:58.532906 2776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-config-path\") pod \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\" (UID: \"35eaa9a9-6c2c-4b43-876b-984a07d9b4b4\") " Jan 29 12:12:58.533815 kubelet[2776]: I0129 12:12:58.533667 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.534231 kubelet[2776]: I0129 12:12:58.534075 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.534358 kubelet[2776]: I0129 12:12:58.534109 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cni-path" (OuterVolumeSpecName: "cni-path") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.538221 kubelet[2776]: I0129 12:12:58.536064 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.538221 kubelet[2776]: I0129 12:12:58.536086 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.538798 containerd[1474]: time="2025-01-29T12:12:58.538700192Z" level=info msg="RemoveContainer for \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\" returns successfully" Jan 29 12:12:58.542077 kubelet[2776]: I0129 12:12:58.536218 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-hostproc" (OuterVolumeSpecName: "hostproc") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.542077 kubelet[2776]: I0129 12:12:58.536708 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.542077 kubelet[2776]: I0129 12:12:58.536733 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.542077 kubelet[2776]: I0129 12:12:58.537839 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.542077 kubelet[2776]: I0129 12:12:58.538140 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:12:58.542252 kubelet[2776]: I0129 12:12:58.540463 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3615d730-5b8f-42e9-808e-93298480ea8f-kube-api-access-6n8g8" (OuterVolumeSpecName: "kube-api-access-6n8g8") pod "3615d730-5b8f-42e9-808e-93298480ea8f" (UID: "3615d730-5b8f-42e9-808e-93298480ea8f"). InnerVolumeSpecName "kube-api-access-6n8g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:58.542252 kubelet[2776]: I0129 12:12:58.541651 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3615d730-5b8f-42e9-808e-93298480ea8f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3615d730-5b8f-42e9-808e-93298480ea8f" (UID: "3615d730-5b8f-42e9-808e-93298480ea8f"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:12:58.542854 kubelet[2776]: I0129 12:12:58.542824 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:58.543218 kubelet[2776]: I0129 12:12:58.543133 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:12:58.543323 kubelet[2776]: I0129 12:12:58.543142 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:12:58.543389 kubelet[2776]: I0129 12:12:58.543048 2776 scope.go:117] "RemoveContainer" containerID="3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8" Jan 29 12:12:58.545194 kubelet[2776]: I0129 12:12:58.545064 2776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-kube-api-access-jm2fs" (OuterVolumeSpecName: "kube-api-access-jm2fs") pod "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" (UID: "35eaa9a9-6c2c-4b43-876b-984a07d9b4b4"). InnerVolumeSpecName "kube-api-access-jm2fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:58.546318 containerd[1474]: time="2025-01-29T12:12:58.546233472Z" level=info msg="RemoveContainer for \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\"" Jan 29 12:12:58.550681 containerd[1474]: time="2025-01-29T12:12:58.550638392Z" level=info msg="RemoveContainer for \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\" returns successfully" Jan 29 12:12:58.551285 kubelet[2776]: I0129 12:12:58.551105 2776 scope.go:117] "RemoveContainer" containerID="13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623" Jan 29 12:12:58.553434 containerd[1474]: time="2025-01-29T12:12:58.553313912Z" level=info msg="RemoveContainer for \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\"" Jan 29 12:12:58.556698 containerd[1474]: time="2025-01-29T12:12:58.556650912Z" level=info msg="RemoveContainer for \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\" returns successfully" Jan 29 12:12:58.556995 kubelet[2776]: I0129 12:12:58.556909 2776 scope.go:117] "RemoveContainer" containerID="e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da" Jan 29 12:12:58.558695 containerd[1474]: time="2025-01-29T12:12:58.558616112Z" level=info msg="RemoveContainer for \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\"" Jan 29 12:12:58.561766 containerd[1474]: time="2025-01-29T12:12:58.561697952Z" level=info msg="RemoveContainer for \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\" returns successfully" Jan 29 12:12:58.562341 kubelet[2776]: I0129 12:12:58.562015 2776 scope.go:117] "RemoveContainer" containerID="eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049" Jan 29 12:12:58.563898 containerd[1474]: time="2025-01-29T12:12:58.563524952Z" level=info msg="RemoveContainer for \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\"" Jan 29 12:12:58.566634 containerd[1474]: 
time="2025-01-29T12:12:58.566599192Z" level=info msg="RemoveContainer for \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\" returns successfully" Jan 29 12:12:58.567105 kubelet[2776]: I0129 12:12:58.567087 2776 scope.go:117] "RemoveContainer" containerID="98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d" Jan 29 12:12:58.567458 containerd[1474]: time="2025-01-29T12:12:58.567385872Z" level=error msg="ContainerStatus for \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\": not found" Jan 29 12:12:58.567671 kubelet[2776]: E0129 12:12:58.567645 2776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\": not found" containerID="98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d" Jan 29 12:12:58.567830 kubelet[2776]: I0129 12:12:58.567688 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d"} err="failed to get container status \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\": rpc error: code = NotFound desc = an error occurred when try to find container \"98ef251023e5ef9322089e6615a2d38571a4614e5da0417f792cdd9004a17a1d\": not found" Jan 29 12:12:58.567830 kubelet[2776]: I0129 12:12:58.567810 2776 scope.go:117] "RemoveContainer" containerID="3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8" Jan 29 12:12:58.568178 containerd[1474]: time="2025-01-29T12:12:58.568145192Z" level=error msg="ContainerStatus for \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\" failed" error="rpc error: code = NotFound desc = an error 
occurred when try to find container \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\": not found" Jan 29 12:12:58.568340 kubelet[2776]: E0129 12:12:58.568318 2776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\": not found" containerID="3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8" Jan 29 12:12:58.568377 kubelet[2776]: I0129 12:12:58.568345 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8"} err="failed to get container status \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3f221dd1a0b16a9f23c7438ded227374839cb747a7ac10ffd61a3bb1b73c93d8\": not found" Jan 29 12:12:58.568377 kubelet[2776]: I0129 12:12:58.568375 2776 scope.go:117] "RemoveContainer" containerID="13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623" Jan 29 12:12:58.568674 containerd[1474]: time="2025-01-29T12:12:58.568646552Z" level=error msg="ContainerStatus for \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\": not found" Jan 29 12:12:58.568942 kubelet[2776]: E0129 12:12:58.568896 2776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\": not found" containerID="13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623" Jan 29 12:12:58.568980 kubelet[2776]: I0129 12:12:58.568918 2776 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623"} err="failed to get container status \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\": rpc error: code = NotFound desc = an error occurred when try to find container \"13d0f8373c3e635360d6f5ac9e6e05314b8561aeac0a749a4f1a5dd1d8d40623\": not found" Jan 29 12:12:58.568980 kubelet[2776]: I0129 12:12:58.568957 2776 scope.go:117] "RemoveContainer" containerID="e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da" Jan 29 12:12:58.569296 containerd[1474]: time="2025-01-29T12:12:58.569219032Z" level=error msg="ContainerStatus for \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\": not found" Jan 29 12:12:58.569381 kubelet[2776]: E0129 12:12:58.569342 2776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\": not found" containerID="e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da" Jan 29 12:12:58.569381 kubelet[2776]: I0129 12:12:58.569359 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da"} err="failed to get container status \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\": rpc error: code = NotFound desc = an error occurred when try to find container \"e475a835086c511fa94fae7dedb482639dfe5e75883cde8ea216e2f9d8eb40da\": not found" Jan 29 12:12:58.569381 kubelet[2776]: I0129 12:12:58.569374 2776 scope.go:117] "RemoveContainer" 
containerID="eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049" Jan 29 12:12:58.569949 containerd[1474]: time="2025-01-29T12:12:58.569865312Z" level=error msg="ContainerStatus for \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\": not found" Jan 29 12:12:58.570041 kubelet[2776]: E0129 12:12:58.570008 2776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\": not found" containerID="eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049" Jan 29 12:12:58.570073 kubelet[2776]: I0129 12:12:58.570035 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049"} err="failed to get container status \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\": rpc error: code = NotFound desc = an error occurred when try to find container \"eb65c67737212520ed4e2234e257489eaa0b1f26e7f1c19d45433d38a4c43049\": not found" Jan 29 12:12:58.570073 kubelet[2776]: I0129 12:12:58.570050 2776 scope.go:117] "RemoveContainer" containerID="b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd" Jan 29 12:12:58.571309 containerd[1474]: time="2025-01-29T12:12:58.571275592Z" level=info msg="RemoveContainer for \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\"" Jan 29 12:12:58.574446 containerd[1474]: time="2025-01-29T12:12:58.574326352Z" level=info msg="RemoveContainer for \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\" returns successfully" Jan 29 12:12:58.574678 kubelet[2776]: I0129 12:12:58.574559 2776 scope.go:117] "RemoveContainer" 
containerID="b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd" Jan 29 12:12:58.574979 containerd[1474]: time="2025-01-29T12:12:58.574901832Z" level=error msg="ContainerStatus for \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\": not found" Jan 29 12:12:58.575061 kubelet[2776]: E0129 12:12:58.575031 2776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\": not found" containerID="b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd" Jan 29 12:12:58.575108 kubelet[2776]: I0129 12:12:58.575088 2776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd"} err="failed to get container status \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4c81eb2c65bd6599ed6d88af42ce2c1c57c875eee4f0460900a574433889bcd\": not found" Jan 29 12:12:58.619475 systemd[1]: Removed slice kubepods-besteffort-pod3615d730_5b8f_42e9_808e_93298480ea8f.slice - libcontainer container kubepods-besteffort-pod3615d730_5b8f_42e9_808e_93298480ea8f.slice. Jan 29 12:12:58.622587 systemd[1]: Removed slice kubepods-burstable-pod35eaa9a9_6c2c_4b43_876b_984a07d9b4b4.slice - libcontainer container kubepods-burstable-pod35eaa9a9_6c2c_4b43_876b_984a07d9b4b4.slice. Jan 29 12:12:58.622827 systemd[1]: kubepods-burstable-pod35eaa9a9_6c2c_4b43_876b_984a07d9b4b4.slice: Consumed 7.665s CPU time. 
Jan 29 12:12:58.633174 kubelet[2776]: I0129 12:12:58.633095 2776 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-run\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633174 kubelet[2776]: I0129 12:12:58.633145 2776 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-hubble-tls\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633174 kubelet[2776]: I0129 12:12:58.633159 2776 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-etc-cni-netd\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633174 kubelet[2776]: I0129 12:12:58.633171 2776 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-hostproc\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633174 kubelet[2776]: I0129 12:12:58.633186 2776 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jm2fs\" (UniqueName: \"kubernetes.io/projected/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-kube-api-access-jm2fs\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633606 kubelet[2776]: I0129 12:12:58.633201 2776 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-host-proc-sys-net\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633606 kubelet[2776]: I0129 12:12:58.633217 2776 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-host-proc-sys-kernel\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath 
\"\"" Jan 29 12:12:58.633606 kubelet[2776]: I0129 12:12:58.633230 2776 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-config-path\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633606 kubelet[2776]: I0129 12:12:58.633241 2776 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-bpf-maps\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633606 kubelet[2776]: I0129 12:12:58.633253 2776 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cni-path\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633606 kubelet[2776]: I0129 12:12:58.633266 2776 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3615d730-5b8f-42e9-808e-93298480ea8f-cilium-config-path\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633606 kubelet[2776]: I0129 12:12:58.633278 2776 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-lib-modules\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.633606 kubelet[2776]: I0129 12:12:58.633290 2776 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-cilium-cgroup\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.634027 kubelet[2776]: I0129 12:12:58.633302 2776 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-6n8g8\" (UniqueName: \"kubernetes.io/projected/3615d730-5b8f-42e9-808e-93298480ea8f-kube-api-access-6n8g8\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath 
\"\"" Jan 29 12:12:58.634027 kubelet[2776]: I0129 12:12:58.633313 2776 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-xtables-lock\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:58.634027 kubelet[2776]: I0129 12:12:58.633326 2776 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4-clustermesh-secrets\") on node \"ci-4081-3-0-2-f17d477515\" DevicePath \"\"" Jan 29 12:12:59.218953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c-rootfs.mount: Deactivated successfully. Jan 29 12:12:59.219112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac-rootfs.mount: Deactivated successfully. Jan 29 12:12:59.219199 systemd[1]: var-lib-kubelet-pods-3615d730\x2d5b8f\x2d42e9\x2d808e\x2d93298480ea8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6n8g8.mount: Deactivated successfully. Jan 29 12:12:59.219309 systemd[1]: var-lib-kubelet-pods-35eaa9a9\x2d6c2c\x2d4b43\x2d876b\x2d984a07d9b4b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djm2fs.mount: Deactivated successfully. Jan 29 12:12:59.219397 systemd[1]: var-lib-kubelet-pods-35eaa9a9\x2d6c2c\x2d4b43\x2d876b\x2d984a07d9b4b4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 12:12:59.219509 systemd[1]: var-lib-kubelet-pods-35eaa9a9\x2d6c2c\x2d4b43\x2d876b\x2d984a07d9b4b4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 12:13:00.313460 sshd[4357]: pam_unix(sshd:session): session closed for user core Jan 29 12:13:00.317812 systemd[1]: sshd@20-159.69.53.160:22-139.178.89.65:49918.service: Deactivated successfully. 
Jan 29 12:13:00.321258 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:13:00.323124 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:13:00.325593 systemd-logind[1459]: Removed session 20. Jan 29 12:13:00.488791 systemd[1]: Started sshd@21-159.69.53.160:22-139.178.89.65:49924.service - OpenSSH per-connection server daemon (139.178.89.65:49924). Jan 29 12:13:00.612242 kubelet[2776]: I0129 12:13:00.612174 2776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" path="/var/lib/kubelet/pods/35eaa9a9-6c2c-4b43-876b-984a07d9b4b4/volumes" Jan 29 12:13:00.612940 kubelet[2776]: I0129 12:13:00.612908 2776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3615d730-5b8f-42e9-808e-93298480ea8f" path="/var/lib/kubelet/pods/3615d730-5b8f-42e9-808e-93298480ea8f/volumes" Jan 29 12:13:01.470240 sshd[4520]: Accepted publickey for core from 139.178.89.65 port 49924 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA Jan 29 12:13:01.472487 sshd[4520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:13:01.477537 systemd-logind[1459]: New session 21 of user core. Jan 29 12:13:01.487089 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 12:13:01.813570 kubelet[2776]: E0129 12:13:01.813262 2776 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 12:13:03.035097 kubelet[2776]: I0129 12:13:03.035011 2776 topology_manager.go:215] "Topology Admit Handler" podUID="aa31f469-0291-4e73-a9f3-28664b711f1a" podNamespace="kube-system" podName="cilium-lg5ld"
Jan 29 12:13:03.035097 kubelet[2776]: E0129 12:13:03.035073 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" containerName="mount-bpf-fs"
Jan 29 12:13:03.035097 kubelet[2776]: E0129 12:13:03.035082 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" containerName="cilium-agent"
Jan 29 12:13:03.035097 kubelet[2776]: E0129 12:13:03.035088 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3615d730-5b8f-42e9-808e-93298480ea8f" containerName="cilium-operator"
Jan 29 12:13:03.035097 kubelet[2776]: E0129 12:13:03.035095 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" containerName="mount-cgroup"
Jan 29 12:13:03.035097 kubelet[2776]: E0129 12:13:03.035100 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" containerName="apply-sysctl-overwrites"
Jan 29 12:13:03.035097 kubelet[2776]: E0129 12:13:03.035106 2776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" containerName="clean-cilium-state"
Jan 29 12:13:03.036108 kubelet[2776]: I0129 12:13:03.035129 2776 memory_manager.go:354] "RemoveStaleState removing state" podUID="35eaa9a9-6c2c-4b43-876b-984a07d9b4b4" containerName="cilium-agent"
Jan 29 12:13:03.036108 kubelet[2776]: I0129 12:13:03.035135 2776 memory_manager.go:354] "RemoveStaleState removing state" podUID="3615d730-5b8f-42e9-808e-93298480ea8f" containerName="cilium-operator"
Jan 29 12:13:03.042900 systemd[1]: Created slice kubepods-burstable-podaa31f469_0291_4e73_a9f3_28664b711f1a.slice - libcontainer container kubepods-burstable-podaa31f469_0291_4e73_a9f3_28664b711f1a.slice.
Jan 29 12:13:03.044264 kubelet[2776]: W0129 12:13:03.044056 2776 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-2-f17d477515" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-2-f17d477515' and this object
Jan 29 12:13:03.044264 kubelet[2776]: E0129 12:13:03.044107 2776 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-2-f17d477515" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-2-f17d477515' and this object
Jan 29 12:13:03.044264 kubelet[2776]: W0129 12:13:03.044167 2776 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-2-f17d477515" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-2-f17d477515' and this object
Jan 29 12:13:03.044264 kubelet[2776]: E0129 12:13:03.044178 2776 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-2-f17d477515" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-2-f17d477515' and this object
Jan 29 12:13:03.044264 kubelet[2776]: W0129 12:13:03.044177 2776 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-0-2-f17d477515" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-2-f17d477515' and this object
Jan 29 12:13:03.044454 kubelet[2776]: E0129 12:13:03.044198 2776 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-0-2-f17d477515" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-2-f17d477515' and this object
Jan 29 12:13:03.044454 kubelet[2776]: W0129 12:13:03.044227 2776 reflector.go:547] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081-3-0-2-f17d477515" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-2-f17d477515' and this object
Jan 29 12:13:03.044454 kubelet[2776]: E0129 12:13:03.044248 2776 reflector.go:150] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4081-3-0-2-f17d477515" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-2-f17d477515' and this object
Jan 29 12:13:03.162040 kubelet[2776]: I0129 12:13:03.161842 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-hostproc\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162040 kubelet[2776]: I0129 12:13:03.161919 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-etc-cni-netd\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162040 kubelet[2776]: I0129 12:13:03.161961 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aa31f469-0291-4e73-a9f3-28664b711f1a-cilium-config-path\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162040 kubelet[2776]: I0129 12:13:03.162007 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-cni-path\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162040 kubelet[2776]: I0129 12:13:03.162046 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-xtables-lock\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162512 kubelet[2776]: I0129 12:13:03.162080 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-cilium-run\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162512 kubelet[2776]: I0129 12:13:03.162110 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/aa31f469-0291-4e73-a9f3-28664b711f1a-hubble-tls\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162512 kubelet[2776]: I0129 12:13:03.162142 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-lib-modules\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162512 kubelet[2776]: I0129 12:13:03.162172 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/aa31f469-0291-4e73-a9f3-28664b711f1a-cilium-ipsec-secrets\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162512 kubelet[2776]: I0129 12:13:03.162203 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-host-proc-sys-net\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162512 kubelet[2776]: I0129 12:13:03.162238 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45cl\" (UniqueName: \"kubernetes.io/projected/aa31f469-0291-4e73-a9f3-28664b711f1a-kube-api-access-l45cl\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162856 kubelet[2776]: I0129 12:13:03.162270 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-cilium-cgroup\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162856 kubelet[2776]: I0129 12:13:03.162301 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-bpf-maps\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162856 kubelet[2776]: I0129 12:13:03.162333 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/aa31f469-0291-4e73-a9f3-28664b711f1a-clustermesh-secrets\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.162856 kubelet[2776]: I0129 12:13:03.162364 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/aa31f469-0291-4e73-a9f3-28664b711f1a-host-proc-sys-kernel\") pod \"cilium-lg5ld\" (UID: \"aa31f469-0291-4e73-a9f3-28664b711f1a\") " pod="kube-system/cilium-lg5ld"
Jan 29 12:13:03.193853 sshd[4520]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:03.198474 systemd[1]: sshd@21-159.69.53.160:22-139.178.89.65:49924.service: Deactivated successfully.
Jan 29 12:13:03.201482 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 12:13:03.202887 systemd-logind[1459]: Session 21 logged out. Waiting for processes to exit.
Jan 29 12:13:03.204239 systemd-logind[1459]: Removed session 21.
Jan 29 12:13:03.366790 systemd[1]: Started sshd@22-159.69.53.160:22-139.178.89.65:59710.service - OpenSSH per-connection server daemon (139.178.89.65:59710).
Jan 29 12:13:03.438202 kubelet[2776]: I0129 12:13:03.437726 2776 setters.go:580] "Node became not ready" node="ci-4081-3-0-2-f17d477515" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T12:13:03Z","lastTransitionTime":"2025-01-29T12:13:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 12:13:04.264997 kubelet[2776]: E0129 12:13:04.264923 2776 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Jan 29 12:13:04.265565 kubelet[2776]: E0129 12:13:04.265070 2776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa31f469-0291-4e73-a9f3-28664b711f1a-clustermesh-secrets podName:aa31f469-0291-4e73-a9f3-28664b711f1a nodeName:}" failed. No retries permitted until 2025-01-29 12:13:04.765035963 +0000 UTC m=+348.249100972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/aa31f469-0291-4e73-a9f3-28664b711f1a-clustermesh-secrets") pod "cilium-lg5ld" (UID: "aa31f469-0291-4e73-a9f3-28664b711f1a") : failed to sync secret cache: timed out waiting for the condition
Jan 29 12:13:04.265565 kubelet[2776]: E0129 12:13:04.264909 2776 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Jan 29 12:13:04.265565 kubelet[2776]: E0129 12:13:04.265132 2776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa31f469-0291-4e73-a9f3-28664b711f1a-cilium-ipsec-secrets podName:aa31f469-0291-4e73-a9f3-28664b711f1a nodeName:}" failed. No retries permitted until 2025-01-29 12:13:04.765118883 +0000 UTC m=+348.249183932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/aa31f469-0291-4e73-a9f3-28664b711f1a-cilium-ipsec-secrets") pod "cilium-lg5ld" (UID: "aa31f469-0291-4e73-a9f3-28664b711f1a") : failed to sync secret cache: timed out waiting for the condition
Jan 29 12:13:04.338028 sshd[4536]: Accepted publickey for core from 139.178.89.65 port 59710 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:13:04.340395 sshd[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:04.345592 systemd-logind[1459]: New session 22 of user core.
Jan 29 12:13:04.349621 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 12:13:04.848515 containerd[1474]: time="2025-01-29T12:13:04.848398875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lg5ld,Uid:aa31f469-0291-4e73-a9f3-28664b711f1a,Namespace:kube-system,Attempt:0,}"
Jan 29 12:13:04.874105 containerd[1474]: time="2025-01-29T12:13:04.873953555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:13:04.874232 containerd[1474]: time="2025-01-29T12:13:04.874145835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:13:04.874285 containerd[1474]: time="2025-01-29T12:13:04.874234155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:13:04.876134 containerd[1474]: time="2025-01-29T12:13:04.874575195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:13:04.898598 systemd[1]: Started cri-containerd-24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220.scope - libcontainer container 24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220.
Jan 29 12:13:04.927523 containerd[1474]: time="2025-01-29T12:13:04.927457074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lg5ld,Uid:aa31f469-0291-4e73-a9f3-28664b711f1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\""
Jan 29 12:13:04.932706 containerd[1474]: time="2025-01-29T12:13:04.932651674Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 12:13:04.945176 containerd[1474]: time="2025-01-29T12:13:04.945099354Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8cdeb9a5cbf156364527f199cdfcba62b25cc081abd85a18acfa2e0bdb88c3f\""
Jan 29 12:13:04.946149 containerd[1474]: time="2025-01-29T12:13:04.946094794Z" level=info msg="StartContainer for \"d8cdeb9a5cbf156364527f199cdfcba62b25cc081abd85a18acfa2e0bdb88c3f\""
Jan 29 12:13:04.971049 systemd[1]: Started cri-containerd-d8cdeb9a5cbf156364527f199cdfcba62b25cc081abd85a18acfa2e0bdb88c3f.scope - libcontainer container d8cdeb9a5cbf156364527f199cdfcba62b25cc081abd85a18acfa2e0bdb88c3f.
Jan 29 12:13:05.002787 containerd[1474]: time="2025-01-29T12:13:05.002644153Z" level=info msg="StartContainer for \"d8cdeb9a5cbf156364527f199cdfcba62b25cc081abd85a18acfa2e0bdb88c3f\" returns successfully"
Jan 29 12:13:05.012759 systemd[1]: cri-containerd-d8cdeb9a5cbf156364527f199cdfcba62b25cc081abd85a18acfa2e0bdb88c3f.scope: Deactivated successfully.
Jan 29 12:13:05.013696 sshd[4536]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:05.022136 systemd[1]: sshd@22-159.69.53.160:22-139.178.89.65:59710.service: Deactivated successfully.
Jan 29 12:13:05.026100 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 12:13:05.028458 systemd-logind[1459]: Session 22 logged out. Waiting for processes to exit.
Jan 29 12:13:05.030263 systemd-logind[1459]: Removed session 22.
Jan 29 12:13:05.055743 containerd[1474]: time="2025-01-29T12:13:05.055590312Z" level=info msg="shim disconnected" id=d8cdeb9a5cbf156364527f199cdfcba62b25cc081abd85a18acfa2e0bdb88c3f namespace=k8s.io
Jan 29 12:13:05.055743 containerd[1474]: time="2025-01-29T12:13:05.055732952Z" level=warning msg="cleaning up after shim disconnected" id=d8cdeb9a5cbf156364527f199cdfcba62b25cc081abd85a18acfa2e0bdb88c3f namespace=k8s.io
Jan 29 12:13:05.056101 containerd[1474]: time="2025-01-29T12:13:05.055753072Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:13:05.194990 systemd[1]: Started sshd@23-159.69.53.160:22-139.178.89.65:59712.service - OpenSSH per-connection server daemon (139.178.89.65:59712).
Jan 29 12:13:05.563178 containerd[1474]: time="2025-01-29T12:13:05.562830342Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 12:13:05.582469 containerd[1474]: time="2025-01-29T12:13:05.582064382Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d61414e9008c2f2664a8f68d679b43501e061b13d9210f1e4c0dc9484f30225d\""
Jan 29 12:13:05.583467 containerd[1474]: time="2025-01-29T12:13:05.583229342Z" level=info msg="StartContainer for \"d61414e9008c2f2664a8f68d679b43501e061b13d9210f1e4c0dc9484f30225d\""
Jan 29 12:13:05.623902 systemd[1]: Started cri-containerd-d61414e9008c2f2664a8f68d679b43501e061b13d9210f1e4c0dc9484f30225d.scope - libcontainer container d61414e9008c2f2664a8f68d679b43501e061b13d9210f1e4c0dc9484f30225d.
Jan 29 12:13:05.665985 containerd[1474]: time="2025-01-29T12:13:05.665900541Z" level=info msg="StartContainer for \"d61414e9008c2f2664a8f68d679b43501e061b13d9210f1e4c0dc9484f30225d\" returns successfully"
Jan 29 12:13:05.675799 systemd[1]: cri-containerd-d61414e9008c2f2664a8f68d679b43501e061b13d9210f1e4c0dc9484f30225d.scope: Deactivated successfully.
Jan 29 12:13:05.704042 containerd[1474]: time="2025-01-29T12:13:05.703662340Z" level=info msg="shim disconnected" id=d61414e9008c2f2664a8f68d679b43501e061b13d9210f1e4c0dc9484f30225d namespace=k8s.io
Jan 29 12:13:05.704042 containerd[1474]: time="2025-01-29T12:13:05.703777420Z" level=warning msg="cleaning up after shim disconnected" id=d61414e9008c2f2664a8f68d679b43501e061b13d9210f1e4c0dc9484f30225d namespace=k8s.io
Jan 29 12:13:05.704042 containerd[1474]: time="2025-01-29T12:13:05.703797260Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:13:06.205090 sshd[4651]: Accepted publickey for core from 139.178.89.65 port 59712 ssh2: RSA SHA256:7wq88Y6mZHPWeloslPJpjPR/GjZkKRbv3BUAF2pnzNA
Jan 29 12:13:06.205973 sshd[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:13:06.214046 systemd-logind[1459]: New session 23 of user core.
Jan 29 12:13:06.219621 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 12:13:06.566239 containerd[1474]: time="2025-01-29T12:13:06.566198241Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 12:13:06.585795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2013246698.mount: Deactivated successfully.
Jan 29 12:13:06.589764 containerd[1474]: time="2025-01-29T12:13:06.587125521Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2\""
Jan 29 12:13:06.589764 containerd[1474]: time="2025-01-29T12:13:06.589432961Z" level=info msg="StartContainer for \"a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2\""
Jan 29 12:13:06.626643 systemd[1]: Started cri-containerd-a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2.scope - libcontainer container a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2.
Jan 29 12:13:06.654315 containerd[1474]: time="2025-01-29T12:13:06.654273399Z" level=info msg="StartContainer for \"a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2\" returns successfully"
Jan 29 12:13:06.657738 systemd[1]: cri-containerd-a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2.scope: Deactivated successfully.
Jan 29 12:13:06.690364 containerd[1474]: time="2025-01-29T12:13:06.690302238Z" level=info msg="shim disconnected" id=a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2 namespace=k8s.io
Jan 29 12:13:06.690364 containerd[1474]: time="2025-01-29T12:13:06.690357278Z" level=warning msg="cleaning up after shim disconnected" id=a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2 namespace=k8s.io
Jan 29 12:13:06.690364 containerd[1474]: time="2025-01-29T12:13:06.690369598Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:13:06.780458 systemd[1]: run-containerd-runc-k8s.io-a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2-runc.pFIXKK.mount: Deactivated successfully.
Jan 29 12:13:06.780849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5921b8ddd5d0fd1b110a41ecab12130b35c3b460309f7dff1c1dbe03a68c2f2-rootfs.mount: Deactivated successfully.
Jan 29 12:13:06.815061 kubelet[2776]: E0129 12:13:06.814991 2776 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 12:13:07.574983 containerd[1474]: time="2025-01-29T12:13:07.574922335Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 12:13:07.589781 containerd[1474]: time="2025-01-29T12:13:07.589716295Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956\""
Jan 29 12:13:07.591630 containerd[1474]: time="2025-01-29T12:13:07.591097935Z" level=info msg="StartContainer for \"2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956\""
Jan 29 12:13:07.607208 kubelet[2776]: E0129 12:13:07.607161 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-zfx25" podUID="f153bc98-7acd-402d-94c3-2a719309ddba"
Jan 29 12:13:07.631612 systemd[1]: Started cri-containerd-2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956.scope - libcontainer container 2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956.
Jan 29 12:13:07.654996 systemd[1]: cri-containerd-2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956.scope: Deactivated successfully.
Jan 29 12:13:07.661444 containerd[1474]: time="2025-01-29T12:13:07.660745213Z" level=info msg="StartContainer for \"2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956\" returns successfully"
Jan 29 12:13:07.662998 containerd[1474]: time="2025-01-29T12:13:07.662767053Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa31f469_0291_4e73_a9f3_28664b711f1a.slice/cri-containerd-2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956.scope/memory.events\": no such file or directory"
Jan 29 12:13:07.683041 containerd[1474]: time="2025-01-29T12:13:07.682927452Z" level=info msg="shim disconnected" id=2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956 namespace=k8s.io
Jan 29 12:13:07.683329 containerd[1474]: time="2025-01-29T12:13:07.683035892Z" level=warning msg="cleaning up after shim disconnected" id=2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956 namespace=k8s.io
Jan 29 12:13:07.683329 containerd[1474]: time="2025-01-29T12:13:07.683074692Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:13:07.704397 containerd[1474]: time="2025-01-29T12:13:07.703808012Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:13:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 12:13:07.782256 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2077bf1f2e5a6eb52a94e4214b09b65c311c2c529d1324831a509055eaab0956-rootfs.mount: Deactivated successfully.
Jan 29 12:13:08.586436 containerd[1474]: time="2025-01-29T12:13:08.585350584Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 12:13:08.608792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19518452.mount: Deactivated successfully.
Jan 29 12:13:08.614495 containerd[1474]: time="2025-01-29T12:13:08.610059424Z" level=info msg="CreateContainer within sandbox \"24d933e6340ba6a850cf39ba08ce4d633123458a6e92b9214fea92f1a1cb6220\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9d4fc25fc15dc85ca2a8aac4ac103a5f5c867cca11f9b8a9383d40e8070813ff\""
Jan 29 12:13:08.614495 containerd[1474]: time="2025-01-29T12:13:08.611731944Z" level=info msg="StartContainer for \"9d4fc25fc15dc85ca2a8aac4ac103a5f5c867cca11f9b8a9383d40e8070813ff\""
Jan 29 12:13:08.642591 systemd[1]: Started cri-containerd-9d4fc25fc15dc85ca2a8aac4ac103a5f5c867cca11f9b8a9383d40e8070813ff.scope - libcontainer container 9d4fc25fc15dc85ca2a8aac4ac103a5f5c867cca11f9b8a9383d40e8070813ff.
Jan 29 12:13:08.673895 containerd[1474]: time="2025-01-29T12:13:08.673447382Z" level=info msg="StartContainer for \"9d4fc25fc15dc85ca2a8aac4ac103a5f5c867cca11f9b8a9383d40e8070813ff\" returns successfully"
Jan 29 12:13:08.996463 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 12:13:09.607049 kubelet[2776]: E0129 12:13:09.606893 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-zfx25" podUID="f153bc98-7acd-402d-94c3-2a719309ddba"
Jan 29 12:13:11.071493 systemd[1]: run-containerd-runc-k8s.io-9d4fc25fc15dc85ca2a8aac4ac103a5f5c867cca11f9b8a9383d40e8070813ff-runc.5luP4I.mount: Deactivated successfully.
Jan 29 12:13:11.607526 kubelet[2776]: E0129 12:13:11.606972 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-zfx25" podUID="f153bc98-7acd-402d-94c3-2a719309ddba"
Jan 29 12:13:11.940117 systemd-networkd[1376]: lxc_health: Link UP
Jan 29 12:13:11.957519 systemd-networkd[1376]: lxc_health: Gained carrier
Jan 29 12:13:12.873863 kubelet[2776]: I0129 12:13:12.873597 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lg5ld" podStartSLOduration=9.873577523 podStartE2EDuration="9.873577523s" podCreationTimestamp="2025-01-29 12:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:13:09.609807108 +0000 UTC m=+353.093872157" watchObservedRunningTime="2025-01-29 12:13:12.873577523 +0000 UTC m=+356.357642532"
Jan 29 12:13:13.237629 systemd[1]: run-containerd-runc-k8s.io-9d4fc25fc15dc85ca2a8aac4ac103a5f5c867cca11f9b8a9383d40e8070813ff-runc.NGeA8l.mount: Deactivated successfully.
Jan 29 12:13:13.288605 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Jan 29 12:13:16.649557 containerd[1474]: time="2025-01-29T12:13:16.649480954Z" level=info msg="StopPodSandbox for \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\""
Jan 29 12:13:16.649950 containerd[1474]: time="2025-01-29T12:13:16.649670518Z" level=info msg="TearDown network for sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" successfully"
Jan 29 12:13:16.649950 containerd[1474]: time="2025-01-29T12:13:16.649701479Z" level=info msg="StopPodSandbox for \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" returns successfully"
Jan 29 12:13:16.652258 containerd[1474]: time="2025-01-29T12:13:16.650751502Z" level=info msg="RemovePodSandbox for \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\""
Jan 29 12:13:16.652258 containerd[1474]: time="2025-01-29T12:13:16.650867185Z" level=info msg="Forcibly stopping sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\""
Jan 29 12:13:16.652258 containerd[1474]: time="2025-01-29T12:13:16.650946667Z" level=info msg="TearDown network for sandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" successfully"
Jan 29 12:13:16.654401 containerd[1474]: time="2025-01-29T12:13:16.654269741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:13:16.654401 containerd[1474]: time="2025-01-29T12:13:16.654346262Z" level=info msg="RemovePodSandbox \"cfecd5e6bfcacb2a10be5cc9b34de680f293e7498f0972085dea2ba4a61d5cac\" returns successfully"
Jan 29 12:13:16.655133 containerd[1474]: time="2025-01-29T12:13:16.654867034Z" level=info msg="StopPodSandbox for \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\""
Jan 29 12:13:16.655133 containerd[1474]: time="2025-01-29T12:13:16.654928275Z" level=info msg="TearDown network for sandbox \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\" successfully"
Jan 29 12:13:16.655133 containerd[1474]: time="2025-01-29T12:13:16.654937915Z" level=info msg="StopPodSandbox for \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\" returns successfully"
Jan 29 12:13:16.655729 containerd[1474]: time="2025-01-29T12:13:16.655552849Z" level=info msg="RemovePodSandbox for \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\""
Jan 29 12:13:16.655729 containerd[1474]: time="2025-01-29T12:13:16.655600210Z" level=info msg="Forcibly stopping sandbox \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\""
Jan 29 12:13:16.655729 containerd[1474]: time="2025-01-29T12:13:16.655645011Z" level=info msg="TearDown network for sandbox \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\" successfully"
Jan 29 12:13:16.658441 containerd[1474]: time="2025-01-29T12:13:16.658341871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 12:13:16.658441 containerd[1474]: time="2025-01-29T12:13:16.658388032Z" level=info msg="RemovePodSandbox \"16df71066eb53a0c680c3e8fc1ac7871fad70cce6c8e63dad9e0ca776f325e8c\" returns successfully"
Jan 29 12:13:17.755311 sshd[4651]: pam_unix(sshd:session): session closed for user core
Jan 29 12:13:17.760184 systemd-logind[1459]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:13:17.761102 systemd[1]: sshd@23-159.69.53.160:22-139.178.89.65:59712.service: Deactivated successfully.
Jan 29 12:13:17.763809 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:13:17.766621 systemd-logind[1459]: Removed session 23.