Jan 29 16:08:28.896573 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 16:08:28.896606 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025
Jan 29 16:08:28.896619 kernel: KASLR enabled
Jan 29 16:08:28.896626 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 29 16:08:28.896632 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 29 16:08:28.896637 kernel: random: crng init done
Jan 29 16:08:28.896644 kernel: secureboot: Secure boot disabled
Jan 29 16:08:28.896650 kernel: ACPI: Early table checksum verification disabled
Jan 29 16:08:28.896656 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 29 16:08:28.896664 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 29 16:08:28.896670 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896676 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896682 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896688 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896696 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896704 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896710 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896717 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896723 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 16:08:28.896729 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 16:08:28.896735 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 29 16:08:28.896741 kernel: NUMA: Failed to initialise from firmware
Jan 29 16:08:28.896748 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 16:08:28.896754 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 29 16:08:28.896760 kernel: Zone ranges:
Jan 29 16:08:28.896768 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 16:08:28.896774 kernel: DMA32 empty
Jan 29 16:08:28.896781 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 29 16:08:28.896787 kernel: Movable zone start for each node
Jan 29 16:08:28.896793 kernel: Early memory node ranges
Jan 29 16:08:28.896799 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 29 16:08:28.896806 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 29 16:08:28.896812 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 29 16:08:28.896818 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 29 16:08:28.896824 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 29 16:08:28.896830 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 29 16:08:28.896836 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 29 16:08:28.896844 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 29 16:08:28.896850 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 29 16:08:28.896857 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 16:08:28.896866 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 29 16:08:28.896873 kernel: psci: probing for conduit method from ACPI.
Jan 29 16:08:28.896880 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 16:08:28.896888 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 16:08:28.896894 kernel: psci: Trusted OS migration not required
Jan 29 16:08:28.896901 kernel: psci: SMC Calling Convention v1.1
Jan 29 16:08:28.896907 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 16:08:28.896914 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 16:08:28.896921 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 16:08:28.896928 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 16:08:28.896934 kernel: Detected PIPT I-cache on CPU0
Jan 29 16:08:28.896941 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 16:08:28.896947 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 16:08:28.896955 kernel: CPU features: detected: Spectre-v4
Jan 29 16:08:28.896962 kernel: CPU features: detected: Spectre-BHB
Jan 29 16:08:28.896969 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 16:08:28.896976 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 16:08:28.896983 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 16:08:28.896989 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 16:08:28.896996 kernel: alternatives: applying boot alternatives
Jan 29 16:08:28.897004 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac
Jan 29 16:08:28.897013 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 16:08:28.897020 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 16:08:28.897028 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 16:08:28.897038 kernel: Fallback order for Node 0: 0
Jan 29 16:08:28.897045 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 29 16:08:28.897052 kernel: Policy zone: Normal
Jan 29 16:08:28.897058 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 16:08:28.897065 kernel: software IO TLB: area num 2.
Jan 29 16:08:28.897072 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 29 16:08:28.897079 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved)
Jan 29 16:08:28.897086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 16:08:28.897093 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 16:08:28.897100 kernel: rcu: RCU event tracing is enabled.
Jan 29 16:08:28.897107 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 16:08:28.897114 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 16:08:28.897122 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 16:08:28.897129 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 16:08:28.897136 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 16:08:28.897143 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 16:08:28.897149 kernel: GICv3: 256 SPIs implemented
Jan 29 16:08:28.897156 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 16:08:28.897162 kernel: Root IRQ handler: gic_handle_irq
Jan 29 16:08:28.897169 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 16:08:28.897176 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 16:08:28.897183 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 16:08:28.897190 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 16:08:28.897198 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 16:08:28.897205 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 29 16:08:28.897212 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 29 16:08:28.897219 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 16:08:28.897225 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:08:28.897232 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 16:08:28.897238 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 16:08:28.897245 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 16:08:28.897252 kernel: Console: colour dummy device 80x25
Jan 29 16:08:28.897259 kernel: ACPI: Core revision 20230628
Jan 29 16:08:28.897266 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 16:08:28.897274 kernel: pid_max: default: 32768 minimum: 301
Jan 29 16:08:28.897281 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 16:08:28.897288 kernel: landlock: Up and running.
Jan 29 16:08:28.897295 kernel: SELinux: Initializing.
Jan 29 16:08:28.897302 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:08:28.897309 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 16:08:28.897315 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:08:28.897322 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 16:08:28.897329 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 16:08:28.897363 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 16:08:28.897370 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 16:08:28.897377 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 16:08:28.897384 kernel: Remapping and enabling EFI services.
Jan 29 16:08:28.897391 kernel: smp: Bringing up secondary CPUs ...
Jan 29 16:08:28.897398 kernel: Detected PIPT I-cache on CPU1
Jan 29 16:08:28.897404 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 16:08:28.897411 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 29 16:08:28.897418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 16:08:28.897427 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 16:08:28.897434 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 16:08:28.897446 kernel: SMP: Total of 2 processors activated.
Jan 29 16:08:28.897454 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 16:08:28.897462 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 16:08:28.897469 kernel: CPU features: detected: Common not Private translations
Jan 29 16:08:28.897476 kernel: CPU features: detected: CRC32 instructions
Jan 29 16:08:28.897484 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 16:08:28.897542 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 16:08:28.897556 kernel: CPU features: detected: LSE atomic instructions
Jan 29 16:08:28.897564 kernel: CPU features: detected: Privileged Access Never
Jan 29 16:08:28.897573 kernel: CPU features: detected: RAS Extension Support
Jan 29 16:08:28.897580 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 16:08:28.897588 kernel: CPU: All CPU(s) started at EL1
Jan 29 16:08:28.897595 kernel: alternatives: applying system-wide alternatives
Jan 29 16:08:28.897602 kernel: devtmpfs: initialized
Jan 29 16:08:28.897610 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 16:08:28.897619 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 16:08:28.897627 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 16:08:28.897634 kernel: SMBIOS 3.0.0 present.
Jan 29 16:08:28.897642 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 29 16:08:28.897649 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 16:08:28.897657 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 16:08:28.897664 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 16:08:28.897671 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 16:08:28.897680 kernel: audit: initializing netlink subsys (disabled)
Jan 29 16:08:28.897689 kernel: audit: type=2000 audit(0.010:1): state=initialized audit_enabled=0 res=1
Jan 29 16:08:28.897697 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 16:08:28.897704 kernel: cpuidle: using governor menu
Jan 29 16:08:28.897711 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 16:08:28.897718 kernel: ASID allocator initialised with 32768 entries
Jan 29 16:08:28.897725 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 16:08:28.897733 kernel: Serial: AMBA PL011 UART driver
Jan 29 16:08:28.897740 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 16:08:28.897747 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 16:08:28.897756 kernel: Modules: 509280 pages in range for PLT usage
Jan 29 16:08:28.897763 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 16:08:28.897770 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 16:08:28.897777 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 16:08:28.897784 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 16:08:28.897792 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 16:08:28.897799 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 16:08:28.897806 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 16:08:28.897813 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 16:08:28.897822 kernel: ACPI: Added _OSI(Module Device)
Jan 29 16:08:28.897829 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 16:08:28.897836 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 16:08:28.897843 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 16:08:28.897850 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 16:08:28.897857 kernel: ACPI: Interpreter enabled
Jan 29 16:08:28.897864 kernel: ACPI: Using GIC for interrupt routing
Jan 29 16:08:28.897871 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 16:08:28.897878 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 16:08:28.897887 kernel: printk: console [ttyAMA0] enabled
Jan 29 16:08:28.897894 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 16:08:28.898071 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 16:08:28.898145 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 16:08:28.898213 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 16:08:28.898277 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 16:08:28.900015 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 16:08:28.900053 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 16:08:28.900061 kernel: PCI host bridge to bus 0000:00
Jan 29 16:08:28.900168 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 16:08:28.900231 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 16:08:28.900290 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 16:08:28.900369 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 16:08:28.900464 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 16:08:28.900573 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 29 16:08:28.900645 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 29 16:08:28.900713 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 16:08:28.900789 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.900855 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 29 16:08:28.900927 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.900996 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 29 16:08:28.901067 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.901131 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 29 16:08:28.901201 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.901266 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 29 16:08:28.902137 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.902257 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 29 16:08:28.902448 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.902553 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 29 16:08:28.902629 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.902693 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 29 16:08:28.902764 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.902826 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 29 16:08:28.902901 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 16:08:28.902964 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 29 16:08:28.903036 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 29 16:08:28.903099 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 29 16:08:28.903174 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:08:28.903239 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 29 16:08:28.903308 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 16:08:28.903394 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 16:08:28.903479 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 16:08:28.903563 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 29 16:08:28.903639 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 16:08:28.903707 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 29 16:08:28.903774 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 29 16:08:28.903854 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 16:08:28.903920 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 29 16:08:28.903994 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 16:08:28.904060 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 29 16:08:28.904127 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 29 16:08:28.904200 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 16:08:28.904271 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 29 16:08:28.905565 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 16:08:28.905680 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 16:08:28.905752 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 29 16:08:28.905820 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 29 16:08:28.905886 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 16:08:28.905962 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 29 16:08:28.906027 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 29 16:08:28.906091 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 29 16:08:28.906159 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 29 16:08:28.906224 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 29 16:08:28.906288 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 29 16:08:28.906573 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 16:08:28.906656 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 29 16:08:28.906719 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 29 16:08:28.906974 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 16:08:28.907042 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 29 16:08:28.907105 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 29 16:08:28.907172 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 16:08:28.907236 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 29 16:08:28.907299 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 29 16:08:28.907399 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 16:08:28.907468 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 29 16:08:28.907583 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 29 16:08:28.907658 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 16:08:28.907722 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 29 16:08:28.907786 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 29 16:08:28.907856 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 16:08:28.907932 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 29 16:08:28.907995 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 29 16:08:28.908062 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 16:08:28.908126 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 29 16:08:28.908189 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 29 16:08:28.908255 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 29 16:08:28.908320 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 16:08:28.908405 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 29 16:08:28.908475 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 16:08:28.908554 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 29 16:08:28.908620 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 16:08:28.908686 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 29 16:08:28.908750 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 16:08:28.908815 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 29 16:08:28.908883 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 16:08:28.908954 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 29 16:08:28.909018 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 16:08:28.909087 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 29 16:08:28.909151 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 16:08:28.909216 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 29 16:08:28.909279 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 16:08:28.909423 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 29 16:08:28.909507 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 16:08:28.909580 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 29 16:08:28.909646 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 29 16:08:28.909716 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 29 16:08:28.909785 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 16:08:28.909850 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 29 16:08:28.909912 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 16:08:28.909982 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 29 16:08:28.910045 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 16:08:28.910109 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 29 16:08:28.910172 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 16:08:28.910237 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 29 16:08:28.910300 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 16:08:28.910386 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 29 16:08:28.910452 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 16:08:28.910567 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 29 16:08:28.910637 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 16:08:28.910704 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 29 16:08:28.910768 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 16:08:28.910833 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 29 16:08:28.910896 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 29 16:08:28.910964 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 29 16:08:28.911038 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 29 16:08:28.911108 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 16:08:28.911182 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 29 16:08:28.911250 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 16:08:28.911314 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 16:08:28.912916 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 29 16:08:28.912997 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 16:08:28.913071 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 29 16:08:28.913142 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 16:08:28.913206 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 16:08:28.913269 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 29 16:08:28.913332 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 16:08:28.913441 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 16:08:28.913572 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 29 16:08:28.913645 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 16:08:28.914727 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 16:08:28.914810 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 29 16:08:28.914878 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 16:08:28.914955 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 16:08:28.915025 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 16:08:28.915096 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 16:08:28.915176 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 29 16:08:28.915243 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 16:08:28.915320 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 29 16:08:28.915424 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 29 16:08:28.915536 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 16:08:28.915616 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 16:08:28.915714 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 29 16:08:28.915784 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 16:08:28.915865 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 29 16:08:28.915936 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 29 16:08:28.916007 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 16:08:28.916073 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 16:08:28.916154 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 29 16:08:28.916223 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 16:08:28.916328 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 29 16:08:28.917715 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 29 16:08:28.917807 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 29 16:08:28.917885 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 16:08:28.917962 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 16:08:28.918035 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 29 16:08:28.918108 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 16:08:28.918186 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 16:08:28.918261 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 16:08:28.920451 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 29 16:08:28.920559 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 16:08:28.920631 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 16:08:28.920695 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 29 16:08:28.920766 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 29 16:08:28.920837 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 16:08:28.920916 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 16:08:28.920981 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 16:08:28.921057 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 16:08:28.921139 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 16:08:28.921199 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 29 16:08:28.921258 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 16:08:28.921324 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 29 16:08:28.921407 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 29 16:08:28.921472 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 16:08:28.921588 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 29 16:08:28.921654 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 29 16:08:28.921713 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 16:08:28.922472 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 29 16:08:28.922566 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 29 16:08:28.922628 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 16:08:28.922710 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 29 16:08:28.922769 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 29 16:08:28.922831 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 16:08:28.922897 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 29 16:08:28.922959 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 29 16:08:28.923019 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 16:08:28.923086 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 29 16:08:28.923145 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 29 16:08:28.923203 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 16:08:28.923270 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 29 16:08:28.923331 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 29 16:08:28.923417 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 16:08:28.923488 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 29 16:08:28.923597 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 29 16:08:28.923741 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 16:08:28.923757 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 16:08:28.923765 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 16:08:28.923773 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 16:08:28.923781 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 16:08:28.923793 kernel: iommu: Default domain type: Translated
Jan 29 16:08:28.923801 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 16:08:28.923810 kernel: efivars: Registered efivars operations
Jan 29 16:08:28.923817 kernel: vgaarb: loaded
Jan 29 16:08:28.923825 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 16:08:28.923832 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 16:08:28.923840 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 16:08:28.923847 kernel: pnp: PnP ACPI init
Jan 29 16:08:28.923939 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 16:08:28.923953 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 16:08:28.923961 kernel: NET: Registered PF_INET protocol family
Jan 29 16:08:28.923969 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 16:08:28.923977 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 16:08:28.923984 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 16:08:28.923993 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 16:08:28.924000 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 16:08:28.924008 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 16:08:28.924017 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:08:28.924025 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 16:08:28.924033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 16:08:28.924109 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 29 16:08:28.924120 kernel: PCI: CLS 0 bytes, default 64
Jan 29 16:08:28.924128 kernel: kvm [1]: HYP mode not available
Jan 29 16:08:28.924135 kernel: Initialise system trusted keyrings
Jan 29 16:08:28.924143 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 16:08:28.924150 kernel: Key type asymmetric registered
Jan 29 16:08:28.924159 kernel: Asymmetric key parser 'x509' registered
Jan 29 16:08:28.924167 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 16:08:28.924174 kernel: io scheduler mq-deadline registered
Jan 29 16:08:28.924182 kernel: io scheduler kyber registered
Jan 29 16:08:28.924190 kernel: io scheduler bfq registered
Jan 29 16:08:28.924198 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 16:08:28.924266 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 29 16:08:28.927408 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 29 16:08:28.927607 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 16:08:28.927685 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 29 16:08:28.927752 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 29 16:08:28.927817 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 29 16:08:28.927887 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 16:08:28.927953 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 16:08:28.928022 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:08:28.928090 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 16:08:28.928154 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 16:08:28.928218 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:08:28.928286 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 16:08:28.928369 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 16:08:28.928440 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:08:28.928533 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 16:08:28.928605 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 16:08:28.928671 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:08:28.928747 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 16:08:28.928813 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 16:08:28.928881 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:08:28.928949 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 16:08:28.929013 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 16:08:28.929077 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 
16:08:28.929088 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 16:08:28.929154 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 16:08:28.929224 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 16:08:28.929287 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:08:28.929297 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 16:08:28.929305 kernel: ACPI: button: Power Button [PWRB] Jan 29 16:08:28.929313 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 16:08:28.930032 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 16:08:28.930120 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 16:08:28.930132 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:08:28.930146 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 16:08:28.930218 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 16:08:28.930228 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 16:08:28.930236 kernel: thunder_xcv, ver 1.0 Jan 29 16:08:28.930244 kernel: thunder_bgx, ver 1.0 Jan 29 16:08:28.930252 kernel: nicpf, ver 1.0 Jan 29 16:08:28.930260 kernel: nicvf, ver 1.0 Jan 29 16:08:28.930376 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 16:08:28.930449 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T16:08:28 UTC (1738166908) Jan 29 16:08:28.930463 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 16:08:28.930471 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 16:08:28.930478 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 16:08:28.930486 kernel: watchdog: Hard watchdog permanently disabled Jan 29 16:08:28.930532 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:08:28.930541 kernel: Segment 
Routing with IPv6 Jan 29 16:08:28.930549 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:08:28.930557 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:08:28.930567 kernel: Key type dns_resolver registered Jan 29 16:08:28.930575 kernel: registered taskstats version 1 Jan 29 16:08:28.930583 kernel: Loading compiled-in X.509 certificates Jan 29 16:08:28.930591 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b' Jan 29 16:08:28.930598 kernel: Key type .fscrypt registered Jan 29 16:08:28.930605 kernel: Key type fscrypt-provisioning registered Jan 29 16:08:28.930613 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 16:08:28.930620 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:08:28.930628 kernel: ima: No architecture policies found Jan 29 16:08:28.930637 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 16:08:28.930645 kernel: clk: Disabling unused clocks Jan 29 16:08:28.930653 kernel: Freeing unused kernel memory: 38336K Jan 29 16:08:28.930661 kernel: Run /init as init process Jan 29 16:08:28.930668 kernel: with arguments: Jan 29 16:08:28.930676 kernel: /init Jan 29 16:08:28.930683 kernel: with environment: Jan 29 16:08:28.930691 kernel: HOME=/ Jan 29 16:08:28.930698 kernel: TERM=linux Jan 29 16:08:28.930707 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:08:28.930716 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:08:28.930727 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:08:28.930735 systemd[1]: Detected virtualization kvm. Jan 29 16:08:28.930743 systemd[1]: Detected architecture arm64. 
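The pciehp lines earlier report each hot-plug slot's capabilities as `Name+`/`Name-` tokens (`AttnBtn+ PwrCtrl+ MRL- …`). As an aside, that convention is easy to parse mechanically; a minimal sketch (the function name is ours, not part of pciehp):

```python
def parse_slot_caps(caps: str) -> dict:
    """Parse pciehp-style capability tokens ("AttnBtn+ MRL- ...") into
    a {name: enabled} mapping. Each token must end in '+' or '-'."""
    flags = {}
    for token in caps.split():
        name, state = token[:-1], token[-1]
        if state not in "+-":
            raise ValueError(f"malformed token: {token!r}")
        flags[name] = state == "+"
    return flags

# Capability string copied from the pcieport 0000:00:02.0 line above.
caps = ("AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ "
        "Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+")
print(parse_slot_caps(caps)["HotPlug"])  # → True
print(parse_slot_caps(caps)["MRL"])      # → False
```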
Jan 29 16:08:28.930750 systemd[1]: Running in initrd. Jan 29 16:08:28.930758 systemd[1]: No hostname configured, using default hostname. Jan 29 16:08:28.930769 systemd[1]: Hostname set to . Jan 29 16:08:28.930778 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:08:28.930786 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:08:28.930794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:08:28.930803 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:08:28.930812 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:08:28.930820 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:08:28.930828 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:08:28.930839 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:08:28.930848 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:08:28.930856 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 16:08:28.930864 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:08:28.930872 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:08:28.930880 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:08:28.930888 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:08:28.930898 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:08:28.930906 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:08:28.930914 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 29 16:08:28.930922 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:08:28.930930 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:08:28.930938 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:08:28.930946 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:08:28.930955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:08:28.930963 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:08:28.930973 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:08:28.930981 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:08:28.930989 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:08:28.930997 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:08:28.931005 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:08:28.931013 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:08:28.931021 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:08:28.931063 systemd-journald[236]: Collecting audit messages is disabled. Jan 29 16:08:28.931086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:08:28.931094 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:08:28.931102 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:08:28.931112 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:08:28.931121 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:08:28.931129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 16:08:28.931137 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:08:28.931146 systemd-journald[236]: Journal started Jan 29 16:08:28.931167 systemd-journald[236]: Runtime Journal (/run/log/journal/dc3fd2f690ac41f3b8e2ab2ca97fcbe3) is 8M, max 76.6M, 68.6M free. Jan 29 16:08:28.916194 systemd-modules-load[237]: Inserted module 'overlay' Jan 29 16:08:28.933386 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:08:28.936210 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:08:28.941813 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 16:08:28.943730 systemd-modules-load[237]: Inserted module 'br_netfilter' Jan 29 16:08:28.945072 kernel: Bridge firewalling registered Jan 29 16:08:28.945298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:08:28.948928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:08:28.954250 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:08:28.966589 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:08:28.969680 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:08:28.970681 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:08:28.984975 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:08:28.991508 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:08:28.997217 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
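The bridge line above warns that br_netfilter is no longer loaded implicitly and must be loaded by the administrator if bridged traffic should pass through ip/ip6/arptables. On a systemd-based image such as this one, a modules-load.d drop-in is the usual fix (the path and filename below are illustrative, not taken from this system):

```
# /etc/modules-load.d/br_netfilter.conf  (illustrative path)
# Load br_netfilter at boot so bridged traffic is visible to
# ip/ip6/arptables, as the kernel message above requests.
br_netfilter
```

For an already-running system, `modprobe br_netfilter` has the same effect until the next reboot.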
Jan 29 16:08:29.006779 dracut-cmdline[270]: dracut-dracut-053 Jan 29 16:08:29.008592 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:08:29.012674 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac Jan 29 16:08:29.042841 systemd-resolved[278]: Positive Trust Anchors: Jan 29 16:08:29.042858 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:08:29.042890 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:08:29.053282 systemd-resolved[278]: Defaulting to hostname 'linux'. Jan 29 16:08:29.054601 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:08:29.055222 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:08:29.111411 kernel: SCSI subsystem initialized Jan 29 16:08:29.115424 kernel: Loading iSCSI transport class v2.0-870. 
Jan 29 16:08:29.123398 kernel: iscsi: registered transport (tcp) Jan 29 16:08:29.136508 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:08:29.136616 kernel: QLogic iSCSI HBA Driver Jan 29 16:08:29.182957 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 16:08:29.188573 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:08:29.209367 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 16:08:29.209438 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:08:29.209460 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:08:29.261401 kernel: raid6: neonx8 gen() 15679 MB/s Jan 29 16:08:29.278387 kernel: raid6: neonx4 gen() 15670 MB/s Jan 29 16:08:29.295393 kernel: raid6: neonx2 gen() 13187 MB/s Jan 29 16:08:29.312397 kernel: raid6: neonx1 gen() 10350 MB/s Jan 29 16:08:29.329395 kernel: raid6: int64x8 gen() 6764 MB/s Jan 29 16:08:29.346470 kernel: raid6: int64x4 gen() 7303 MB/s Jan 29 16:08:29.363387 kernel: raid6: int64x2 gen() 6035 MB/s Jan 29 16:08:29.380392 kernel: raid6: int64x1 gen() 5027 MB/s Jan 29 16:08:29.380484 kernel: raid6: using algorithm neonx8 gen() 15679 MB/s Jan 29 16:08:29.397378 kernel: raid6: .... xor() 11865 MB/s, rmw enabled Jan 29 16:08:29.397470 kernel: raid6: using neon recovery algorithm Jan 29 16:08:29.402370 kernel: xor: measuring software checksum speed Jan 29 16:08:29.402442 kernel: 8regs : 11626 MB/sec Jan 29 16:08:29.402462 kernel: 32regs : 21710 MB/sec Jan 29 16:08:29.402479 kernel: arm64_neon : 25090 MB/sec Jan 29 16:08:29.403365 kernel: xor: using function: arm64_neon (25090 MB/sec) Jan 29 16:08:29.456420 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:08:29.473270 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
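The raid6 and xor lines above show the kernel benchmarking every available implementation (neonx8, neonx4, int64x1, …) and then selecting the fastest ("using algorithm neonx8 gen() 15679 MB/s"). That benchmark-and-pick pattern can be sketched in miniature as follows; the candidates here are toy stand-ins, not real RAID6 kernels:

```python
import time

def pick_fastest(candidates, reps=3):
    """Run each candidate callable `reps` times and return the name of
    the one with the highest call rate — the same select-by-benchmark
    idea the kernel uses for its raid6 gen() implementations."""
    best_name, best_rate = None, 0.0
    for name, fn in candidates.items():
        start = time.perf_counter()
        for _ in range(reps):
            fn()
        elapsed = time.perf_counter() - start
        rate = reps / elapsed if elapsed > 0 else float("inf")
        if rate > best_rate:
            best_name, best_rate = name, rate
    return best_name

# Toy stand-ins: one implementation is deliberately much slower.
fast = lambda: sum(range(1000))
slow = lambda: time.sleep(0.01)
print(pick_fastest({"neonx8": fast, "int64x1": slow}))  # → neonx8
```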
Jan 29 16:08:29.479578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:08:29.494786 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 29 16:08:29.499032 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:08:29.506719 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:08:29.521921 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jan 29 16:08:29.557813 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:08:29.564594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:08:29.614872 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:08:29.625702 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:08:29.649125 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:08:29.652945 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:08:29.655291 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:08:29.656478 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:08:29.666789 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:08:29.678812 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 16:08:29.743741 kernel: scsi host0: Virtio SCSI HBA Jan 29 16:08:29.748701 kernel: ACPI: bus type USB registered Jan 29 16:08:29.748779 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 16:08:29.749710 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 29 16:08:29.756375 kernel: usbcore: registered new interface driver usbfs Jan 29 16:08:29.756429 kernel: usbcore: registered new interface driver hub Jan 29 16:08:29.756440 kernel: usbcore: registered new device driver usb Jan 29 16:08:29.763956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:08:29.764097 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:08:29.767033 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:08:29.768452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:08:29.769113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:08:29.771137 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:08:29.776659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:08:29.793384 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 29 16:08:29.797707 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 29 16:08:29.797858 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 16:08:29.797869 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 29 16:08:29.797391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:08:29.807628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 16:08:29.820442 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 29 16:08:29.829144 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 29 16:08:29.829267 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 29 16:08:29.829377 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 29 16:08:29.829465 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 16:08:29.829570 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:08:29.829581 kernel: GPT:17805311 != 80003071 Jan 29 16:08:29.829598 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:08:29.829607 kernel: GPT:17805311 != 80003071 Jan 29 16:08:29.829616 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:08:29.829625 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:08:29.829635 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 29 16:08:29.833069 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:08:29.842439 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 16:08:29.848147 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 29 16:08:29.848262 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 16:08:29.848368 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 16:08:29.848457 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 29 16:08:29.848579 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 29 16:08:29.848659 kernel: hub 1-0:1.0: USB hub found Jan 29 16:08:29.848756 kernel: hub 1-0:1.0: 4 ports detected Jan 29 16:08:29.848837 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 29 16:08:29.848926 kernel: hub 2-0:1.0: USB hub found Jan 29 16:08:29.849017 kernel: hub 2-0:1.0: 4 ports detected Jan 29 16:08:29.887020 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (518) Jan 29 16:08:29.889002 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (505) Jan 29 16:08:29.905733 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 29 16:08:29.914629 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 29 16:08:29.923385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 16:08:29.930572 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 29 16:08:29.931222 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 29 16:08:29.945774 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:08:29.956009 disk-uuid[576]: Primary Header is updated. Jan 29 16:08:29.956009 disk-uuid[576]: Secondary Entries is updated. Jan 29 16:08:29.956009 disk-uuid[576]: Secondary Header is updated. 
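The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", "17805311 != 80003071") are the usual sign of a small disk image written onto a larger disk: the backup (Alternate) GPT header must occupy the last LBA, but it still sits where the old image ended. A tiny check of that invariant, using the numbers from the log (the function name is ours):

```python
def gpt_backup_ok(total_sectors: int, alt_header_lba: int) -> bool:
    """The backup GPT header must sit at the disk's last LBA
    (total_sectors - 1). After writing a smaller image to a larger
    disk, the backup header is stranded at the old end of the image."""
    return alt_header_lba == total_sectors - 1

# From the log: sda has 80003072 512-byte sectors, so the backup header
# belongs at LBA 80003071, but it was found at LBA 17805311.
print(gpt_backup_ok(80003072, 17805311))  # → False (the kernel's complaint)
print(gpt_backup_ok(80003072, 80003071))  # → True  (after the header is moved)
```

This matches what happens next in the log: disk-uuid.service updates the secondary header and entries, after which the operation completes successfully.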
Jan 29 16:08:29.964376 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:08:29.969379 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:08:30.089641 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 16:08:30.335414 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 29 16:08:30.470007 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 29 16:08:30.470069 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 29 16:08:30.472396 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 29 16:08:30.525402 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 29 16:08:30.525726 kernel: usbcore: registered new interface driver usbhid Jan 29 16:08:30.526759 kernel: usbhid: USB HID core driver Jan 29 16:08:30.974376 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:08:30.975128 disk-uuid[577]: The operation has completed successfully. Jan 29 16:08:31.038815 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:08:31.038927 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:08:31.089869 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:08:31.093843 sh[592]: Success Jan 29 16:08:31.106403 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 16:08:31.162027 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:08:31.170575 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:08:31.171644 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 29 16:08:31.188616 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a Jan 29 16:08:31.188687 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:08:31.188705 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:08:31.188721 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:08:31.189386 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:08:31.195372 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 16:08:31.197880 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:08:31.199250 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:08:31.205639 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:08:31.210736 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:08:31.223098 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:08:31.223465 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:08:31.223533 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:08:31.226415 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:08:31.226491 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:08:31.239523 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:08:31.239604 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:08:31.246451 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:08:31.254164 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 29 16:08:31.341933 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:08:31.351900 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:08:31.352124 ignition[675]: Ignition 2.20.0 Jan 29 16:08:31.352130 ignition[675]: Stage: fetch-offline Jan 29 16:08:31.352172 ignition[675]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:08:31.352180 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:08:31.355508 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:08:31.352361 ignition[675]: parsed url from cmdline: "" Jan 29 16:08:31.352364 ignition[675]: no config URL provided Jan 29 16:08:31.352369 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:08:31.352377 ignition[675]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:08:31.352383 ignition[675]: failed to fetch config: resource requires networking Jan 29 16:08:31.352583 ignition[675]: Ignition finished successfully Jan 29 16:08:31.380535 systemd-networkd[779]: lo: Link UP Jan 29 16:08:31.380545 systemd-networkd[779]: lo: Gained carrier Jan 29 16:08:31.382263 systemd-networkd[779]: Enumeration completed Jan 29 16:08:31.382397 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:08:31.383200 systemd[1]: Reached target network.target - Network. Jan 29 16:08:31.384547 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:08:31.384550 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:08:31.385299 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 16:08:31.385302 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:08:31.385875 systemd-networkd[779]: eth0: Link UP Jan 29 16:08:31.385878 systemd-networkd[779]: eth0: Gained carrier Jan 29 16:08:31.385885 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:08:31.390919 systemd-networkd[779]: eth1: Link UP Jan 29 16:08:31.390922 systemd-networkd[779]: eth1: Gained carrier Jan 29 16:08:31.390932 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:08:31.393584 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 16:08:31.407133 ignition[784]: Ignition 2.20.0 Jan 29 16:08:31.407144 ignition[784]: Stage: fetch Jan 29 16:08:31.407330 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:08:31.407369 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:08:31.407494 ignition[784]: parsed url from cmdline: "" Jan 29 16:08:31.407498 ignition[784]: no config URL provided Jan 29 16:08:31.407504 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:08:31.407513 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:08:31.407605 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 29 16:08:31.408516 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 29 16:08:31.419467 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:08:31.442485 systemd-networkd[779]: eth0: DHCPv4 address 91.107.217.81/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 16:08:31.609072 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 29 
16:08:31.618596 ignition[784]: GET result: OK Jan 29 16:08:31.618742 ignition[784]: parsing config with SHA512: 04b2b6ef28da2da89d472b089302e6faedeabc7d5289df05e3ec4bd18b3458e164221231d2399eea4174f51ef31950128feff647b5f32eb50d59eebb75d3c30a Jan 29 16:08:31.626699 unknown[784]: fetched base config from "system" Jan 29 16:08:31.626714 unknown[784]: fetched base config from "system" Jan 29 16:08:31.627394 ignition[784]: fetch: fetch complete Jan 29 16:08:31.626722 unknown[784]: fetched user config from "hetzner" Jan 29 16:08:31.627402 ignition[784]: fetch: fetch passed Jan 29 16:08:31.631772 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 16:08:31.627492 ignition[784]: Ignition finished successfully Jan 29 16:08:31.640713 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 16:08:31.655698 ignition[791]: Ignition 2.20.0 Jan 29 16:08:31.655709 ignition[791]: Stage: kargs Jan 29 16:08:31.655898 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:08:31.655908 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:08:31.656930 ignition[791]: kargs: kargs passed Jan 29 16:08:31.659070 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:08:31.656988 ignition[791]: Ignition finished successfully Jan 29 16:08:31.665686 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:08:31.679196 ignition[798]: Ignition 2.20.0 Jan 29 16:08:31.679209 ignition[798]: Stage: disks Jan 29 16:08:31.679430 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:08:31.679440 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:08:31.683081 ignition[798]: disks: disks passed Jan 29 16:08:31.683666 ignition[798]: Ignition finished successfully Jan 29 16:08:31.685301 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:08:31.688055 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 29 16:08:31.689423 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:08:31.690135 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:08:31.691197 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:08:31.692172 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:08:31.697574 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:08:31.716704 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 16:08:31.722292 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:08:32.195523 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:08:32.249367 kernel: EXT4-fs (sda9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none. Jan 29 16:08:32.250945 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:08:32.253226 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:08:32.265650 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:08:32.269429 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:08:32.272419 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 16:08:32.273083 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:08:32.273125 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:08:32.287376 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (815) Jan 29 16:08:32.288620 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 29 16:08:32.293733 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:08:32.293771 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:08:32.293784 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:08:32.293796 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:08:32.293808 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:08:32.303055 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 16:08:32.306490 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:08:32.350363 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:08:32.353995 coreos-metadata[817]: Jan 29 16:08:32.353 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 16:08:32.356091 coreos-metadata[817]: Jan 29 16:08:32.356 INFO Fetch successful Jan 29 16:08:32.356091 coreos-metadata[817]: Jan 29 16:08:32.356 INFO wrote hostname ci-4230-0-0-0-1a94fc8352 to /sysroot/etc/hostname Jan 29 16:08:32.359553 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:08:32.360268 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:08:32.367110 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:08:32.373572 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:08:32.472673 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:08:32.480544 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:08:32.483581 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:08:32.490358 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:08:32.518172 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 29 16:08:32.519083 ignition[934]: INFO : Ignition 2.20.0 Jan 29 16:08:32.519083 ignition[934]: INFO : Stage: mount Jan 29 16:08:32.521114 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:08:32.521114 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:08:32.522426 ignition[934]: INFO : mount: mount passed Jan 29 16:08:32.522426 ignition[934]: INFO : Ignition finished successfully Jan 29 16:08:32.523765 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:08:32.529523 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:08:32.748543 systemd-networkd[779]: eth1: Gained IPv6LL Jan 29 16:08:33.188830 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:08:33.197780 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:08:33.208389 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945) Jan 29 16:08:33.211598 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:08:33.211682 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:08:33.211700 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:08:33.215510 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:08:33.215578 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:08:33.217979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 16:08:33.237398 ignition[963]: INFO : Ignition 2.20.0 Jan 29 16:08:33.237398 ignition[963]: INFO : Stage: files Jan 29 16:08:33.238550 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:08:33.238550 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:08:33.240137 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:08:33.240137 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:08:33.240137 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:08:33.244889 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:08:33.244889 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:08:33.244889 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:08:33.243746 unknown[963]: wrote ssh authorized keys file for user: core Jan 29 16:08:33.247965 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 16:08:33.247965 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 16:08:33.324562 systemd-networkd[779]: eth0: Gained IPv6LL Jan 29 16:08:34.222667 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:08:35.229846 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 16:08:35.229846 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:08:35.229846 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 29 16:08:35.823216 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 16:08:35.938378 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 16:08:35.940281 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:08:35.940281 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:08:35.940281 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:08:35.940281 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:08:35.940281 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:08:35.940281 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:08:35.940281 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:08:35.940281 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:08:35.947643 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:08:35.947643 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:08:35.947643 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 16:08:35.947643 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 16:08:35.947643 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 16:08:35.947643 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 29 16:08:36.543611 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 16:08:37.665684 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 16:08:37.665684 ignition[963]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(e): op(f): [finished] writing systemd 
drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:08:37.667891 ignition[963]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:08:37.667891 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:08:37.667891 ignition[963]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:08:37.667891 ignition[963]: INFO : files: files passed Jan 29 16:08:37.667891 ignition[963]: INFO : Ignition finished successfully Jan 29 16:08:37.670374 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:08:37.676589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 16:08:37.681641 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:08:37.686156 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:08:37.686400 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:08:37.699531 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:08:37.699531 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:08:37.702919 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:08:37.706385 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jan 29 16:08:37.707453 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:08:37.713530 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:08:37.743920 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:08:37.744071 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:08:37.745949 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:08:37.746690 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:08:37.747754 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:08:37.753531 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:08:37.767533 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:08:37.774683 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:08:37.786231 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:08:37.787201 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:08:37.788963 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:08:37.791161 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:08:37.791300 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:08:37.794331 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:08:37.794947 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:08:37.797090 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:08:37.798090 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:08:37.799043 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 29 16:08:37.800091 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:08:37.801091 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:08:37.802217 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:08:37.803162 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:08:37.804193 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:08:37.805042 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:08:37.805190 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:08:37.806475 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:08:37.807492 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:08:37.808449 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:08:37.808536 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:08:37.809445 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:08:37.809572 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:08:37.810964 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:08:37.811080 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:08:37.812325 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:08:37.812479 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:08:37.813267 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 16:08:37.813384 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:08:37.822721 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:08:37.823472 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 29 16:08:37.823656 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:08:37.828586 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:08:37.829043 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:08:37.829157 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:08:37.830195 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:08:37.830292 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:08:37.843456 ignition[1014]: INFO : Ignition 2.20.0 Jan 29 16:08:37.843456 ignition[1014]: INFO : Stage: umount Jan 29 16:08:37.843456 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:08:37.843456 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:08:37.847227 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:08:37.848105 ignition[1014]: INFO : umount: umount passed Jan 29 16:08:37.848937 ignition[1014]: INFO : Ignition finished successfully Jan 29 16:08:37.849563 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:08:37.851810 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:08:37.851950 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:08:37.852744 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:08:37.852793 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:08:37.853443 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:08:37.853486 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:08:37.854264 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:08:37.854299 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:08:37.856007 systemd[1]: Stopped target network.target - Network. 
Jan 29 16:08:37.856756 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:08:37.856814 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:08:37.857867 systemd[1]: Stopped target paths.target - Path Units. Jan 29 16:08:37.860938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:08:37.864403 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:08:37.867477 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:08:37.868557 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:08:37.870546 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:08:37.870605 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:08:37.871329 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:08:37.871393 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:08:37.871995 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:08:37.872053 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:08:37.872964 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:08:37.873014 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:08:37.874289 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:08:37.875254 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:08:37.877771 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:08:37.878315 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:08:37.878461 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:08:37.880067 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:08:37.880166 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 29 16:08:37.883783 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:08:37.883892 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:08:37.887262 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:08:37.888932 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:08:37.888996 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:08:37.891139 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:08:37.891597 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:08:37.891726 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:08:37.894072 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:08:37.894684 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:08:37.894945 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:08:37.902562 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:08:37.903749 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:08:37.903861 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:08:37.905524 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:08:37.905585 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:08:37.907345 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:08:37.907401 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 16:08:37.908034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:08:37.910452 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jan 29 16:08:37.923546 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:08:37.923910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:08:37.926669 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:08:37.926776 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:08:37.928239 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:08:37.928272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:08:37.929446 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:08:37.929504 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:08:37.931021 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:08:37.931070 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:08:37.932469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:08:37.932522 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:08:37.941654 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:08:37.942601 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:08:37.942689 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:08:37.945260 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:08:37.945322 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:08:37.946743 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:08:37.948510 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:08:37.952551 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 29 16:08:37.952678 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:08:37.954255 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:08:37.958591 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:08:37.969842 systemd[1]: Switching root. Jan 29 16:08:38.004313 systemd-journald[236]: Journal stopped Jan 29 16:08:38.946600 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jan 29 16:08:38.946665 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:08:38.946680 kernel: SELinux: policy capability open_perms=1 Jan 29 16:08:38.946693 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:08:38.946702 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:08:38.946712 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:08:38.946721 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:08:38.946734 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:08:38.946743 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:08:38.946752 kernel: audit: type=1403 audit(1738166918.153:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:08:38.946763 systemd[1]: Successfully loaded SELinux policy in 35.422ms. Jan 29 16:08:38.946785 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.254ms. Jan 29 16:08:38.946796 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:08:38.946808 systemd[1]: Detected virtualization kvm. Jan 29 16:08:38.946821 systemd[1]: Detected architecture arm64. Jan 29 16:08:38.946831 systemd[1]: Detected first boot. 
Jan 29 16:08:38.946843 systemd[1]: Hostname set to . Jan 29 16:08:38.946857 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:08:38.946870 zram_generator::config[1058]: No configuration found. Jan 29 16:08:38.946881 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:08:38.946890 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:08:38.946905 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:08:38.946915 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:08:38.946925 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:08:38.946937 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:08:38.946947 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:08:38.946957 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:08:38.946967 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:08:38.946977 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:08:38.946987 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:08:38.946996 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:08:38.947007 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:08:38.947018 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:08:38.947028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:08:38.947038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:08:38.947048 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 29 16:08:38.947058 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:08:38.947068 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:08:38.947078 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:08:38.947089 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 16:08:38.947101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:08:38.947111 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:08:38.947121 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:08:38.947131 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:08:38.947140 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 16:08:38.947151 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:08:38.947161 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:08:38.947171 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:08:38.947183 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:08:38.947193 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:08:38.947203 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:08:38.947213 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:08:38.947223 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:08:38.947237 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:08:38.947250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 16:08:38.947260 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 16:08:38.947270 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 16:08:38.947280 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 16:08:38.947290 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 16:08:38.947300 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 16:08:38.947312 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 16:08:38.947322 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 16:08:38.950457 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 16:08:38.950505 systemd[1]: Reached target machines.target - Containers.
Jan 29 16:08:38.950517 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 16:08:38.950528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:38.950538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 16:08:38.950549 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 16:08:38.950559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:08:38.950570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:08:38.950580 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:08:38.950598 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 16:08:38.950609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:08:38.950619 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 16:08:38.950629 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 16:08:38.950639 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 16:08:38.950652 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 16:08:38.950662 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 16:08:38.950681 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:38.950694 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 16:08:38.950704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 16:08:38.950714 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 16:08:38.950725 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 16:08:38.950737 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 29 16:08:38.950747 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 16:08:38.950759 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 16:08:38.950769 systemd[1]: Stopped verity-setup.service.
Jan 29 16:08:38.950812 systemd-journald[1126]: Collecting audit messages is disabled.
Jan 29 16:08:38.950834 kernel: loop: module loaded
Jan 29 16:08:38.950845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 16:08:38.950855 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 16:08:38.950867 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 16:08:38.950877 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 16:08:38.950887 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 16:08:38.950899 systemd-journald[1126]: Journal started
Jan 29 16:08:38.950923 systemd-journald[1126]: Runtime Journal (/run/log/journal/dc3fd2f690ac41f3b8e2ab2ca97fcbe3) is 8M, max 76.6M, 68.6M free.
Jan 29 16:08:38.700599 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 16:08:38.710181 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 16:08:38.710954 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 16:08:38.954988 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 16:08:38.955100 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 16:08:38.957531 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 16:08:38.959312 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 16:08:38.961941 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 16:08:38.965958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:08:38.966610 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:08:38.968578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:08:38.968753 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:08:38.969677 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:08:38.969865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:08:38.970877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 16:08:38.972866 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 16:08:38.974132 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 16:08:38.976353 kernel: fuse: init (API version 7.39)
Jan 29 16:08:38.976465 kernel: ACPI: bus type drm_connector registered
Jan 29 16:08:38.977997 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 16:08:38.979686 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 16:08:38.980788 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:08:38.980974 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:08:38.981889 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 16:08:38.982926 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 29 16:08:38.996054 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 16:08:39.002506 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 16:08:39.007537 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 16:08:39.010482 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 16:08:39.010532 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 16:08:39.012154 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 29 16:08:39.021663 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 16:08:39.029676 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 16:08:39.031668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:39.033124 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 16:08:39.037508 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 16:08:39.038150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:08:39.042613 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 16:08:39.044474 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:08:39.046150 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 16:08:39.049801 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 16:08:39.060979 systemd-journald[1126]: Time spent on flushing to /var/log/journal/dc3fd2f690ac41f3b8e2ab2ca97fcbe3 is 50.597ms for 1138 entries.
Jan 29 16:08:39.060979 systemd-journald[1126]: System Journal (/var/log/journal/dc3fd2f690ac41f3b8e2ab2ca97fcbe3) is 8M, max 584.8M, 576.8M free.
Jan 29 16:08:39.138792 systemd-journald[1126]: Received client request to flush runtime journal.
Jan 29 16:08:39.138851 kernel: loop0: detected capacity change from 0 to 189592
Jan 29 16:08:39.138865 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 16:08:39.062587 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 16:08:39.066040 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 16:08:39.066986 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 16:08:39.068714 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 16:08:39.093407 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 16:08:39.094779 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 16:08:39.101125 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 16:08:39.110582 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 29 16:08:39.121559 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 16:08:39.142926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 16:08:39.145485 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 16:08:39.161166 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 16:08:39.170868 kernel: loop1: detected capacity change from 0 to 8
Jan 29 16:08:39.176844 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 29 16:08:39.187246 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 16:08:39.197576 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 16:08:39.201362 kernel: loop2: detected capacity change from 0 to 123192
Jan 29 16:08:39.231662 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jan 29 16:08:39.231679 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jan 29 16:08:39.236541 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 16:08:39.243380 kernel: loop3: detected capacity change from 0 to 113512
Jan 29 16:08:39.284802 kernel: loop4: detected capacity change from 0 to 189592
Jan 29 16:08:39.309392 kernel: loop5: detected capacity change from 0 to 8
Jan 29 16:08:39.311392 kernel: loop6: detected capacity change from 0 to 123192
Jan 29 16:08:39.322393 kernel: loop7: detected capacity change from 0 to 113512
Jan 29 16:08:39.334097 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 16:08:39.334987 (sd-merge)[1203]: Merged extensions into '/usr'.
Jan 29 16:08:39.339176 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 16:08:39.339479 systemd[1]: Reloading...
Jan 29 16:08:39.465425 zram_generator::config[1234]: No configuration found.
Jan 29 16:08:39.630006 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 16:08:39.633217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:08:39.696762 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 16:08:39.697281 systemd[1]: Reloading finished in 357 ms.
Jan 29 16:08:39.724017 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 16:08:39.725838 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 16:08:39.821679 systemd[1]: Starting ensure-sysext.service...
Jan 29 16:08:39.836570 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 16:08:39.857414 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 16:08:39.858933 systemd[1]: Reload requested from client PID 1268 ('systemctl') (unit ensure-sysext.service)...
Jan 29 16:08:39.858956 systemd[1]: Reloading...
Jan 29 16:08:39.867261 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 16:08:39.868135 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 16:08:39.868919 systemd-tmpfiles[1269]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 16:08:39.869139 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jan 29 16:08:39.869187 systemd-tmpfiles[1269]: ACLs are not supported, ignoring.
Jan 29 16:08:39.873557 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:08:39.873701 systemd-tmpfiles[1269]: Skipping /boot
Jan 29 16:08:39.885749 systemd-tmpfiles[1269]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 16:08:39.885910 systemd-tmpfiles[1269]: Skipping /boot
Jan 29 16:08:39.924374 zram_generator::config[1298]: No configuration found.
Jan 29 16:08:40.032951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:08:40.096031 systemd[1]: Reloading finished in 236 ms.
Jan 29 16:08:40.110544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 16:08:40.135665 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:08:40.138670 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 16:08:40.141749 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 16:08:40.146826 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 16:08:40.150012 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 16:08:40.159543 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 16:08:40.162489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:40.165696 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:08:40.170689 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:08:40.173819 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:08:40.174919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:40.175058 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:40.183255 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 16:08:40.189681 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:40.189857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:40.189940 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:40.195949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:40.200835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 16:08:40.201588 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:40.201759 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:40.206768 systemd[1]: Finished ensure-sysext.service.
Jan 29 16:08:40.208953 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 16:08:40.218763 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 16:08:40.220406 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 16:08:40.227785 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 16:08:40.244288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:08:40.244530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:08:40.248087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:08:40.248279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:08:40.250142 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 16:08:40.252597 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 16:08:40.253747 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:08:40.253891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:08:40.256140 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:08:40.256224 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:08:40.265134 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 16:08:40.277376 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 16:08:40.278970 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:08:40.279912 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Jan 29 16:08:40.305779 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 16:08:40.311492 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 16:08:40.312320 augenrules[1381]: No rules
Jan 29 16:08:40.329910 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 16:08:40.330803 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:08:40.331033 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:08:40.429918 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 16:08:40.431166 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 16:08:40.461837 systemd-networkd[1391]: lo: Link UP
Jan 29 16:08:40.464431 systemd-networkd[1391]: lo: Gained carrier
Jan 29 16:08:40.465607 systemd-networkd[1391]: Enumeration completed
Jan 29 16:08:40.466177 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 16:08:40.474662 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 29 16:08:40.479618 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 16:08:40.481252 systemd-resolved[1340]: Positive Trust Anchors:
Jan 29 16:08:40.481274 systemd-resolved[1340]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 16:08:40.481305 systemd-resolved[1340]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 16:08:40.489239 systemd-resolved[1340]: Using system hostname 'ci-4230-0-0-0-1a94fc8352'.
Jan 29 16:08:40.490861 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 16:08:40.491866 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 16:08:40.491908 systemd[1]: Reached target network.target - Network.
Jan 29 16:08:40.493642 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 16:08:40.506243 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 29 16:08:40.546156 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:40.546515 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:08:40.547527 systemd-networkd[1391]: eth0: Link UP
Jan 29 16:08:40.548001 systemd-networkd[1391]: eth0: Gained carrier
Jan 29 16:08:40.548080 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:40.555205 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:40.555215 systemd-networkd[1391]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 16:08:40.557268 systemd-networkd[1391]: eth1: Link UP
Jan 29 16:08:40.557277 systemd-networkd[1391]: eth1: Gained carrier
Jan 29 16:08:40.557295 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 16:08:40.575436 systemd-networkd[1391]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 16:08:40.576769 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 16:08:40.577311 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 16:08:40.599356 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1390)
Jan 29 16:08:40.608649 systemd-networkd[1391]: eth0: DHCPv4 address 91.107.217.81/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 16:08:40.609731 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 16:08:40.609953 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection.
Jan 29 16:08:40.665063 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 29 16:08:40.665186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 16:08:40.672648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 16:08:40.676691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 16:08:40.682056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 16:08:40.682723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 16:08:40.682776 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 29 16:08:40.682799 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 16:08:40.683141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 16:08:40.683306 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 16:08:40.696201 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 16:08:40.703613 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 16:08:40.704707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 16:08:40.706829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 16:08:40.712650 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 16:08:40.712832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 16:08:40.717723 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 16:08:40.717781 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 16:08:40.723647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 16:08:40.729842 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 29 16:08:40.729909 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 16:08:40.729921 kernel: [drm] features: -context_init
Jan 29 16:08:40.732766 kernel: [drm] number of scanouts: 1
Jan 29 16:08:40.732832 kernel: [drm] number of cap sets: 0
Jan 29 16:08:40.735377 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 29 16:08:40.747349 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 16:08:40.753376 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 16:08:40.770910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 16:08:40.836757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 16:08:40.899470 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 16:08:40.909764 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 16:08:40.923529 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:08:40.952887 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 16:08:40.955038 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 16:08:40.955855 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 16:08:40.956708 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 16:08:40.957880 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 16:08:40.959043 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 16:08:40.959947 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 16:08:40.960792 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 16:08:40.961468 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 16:08:40.961505 systemd[1]: Reached target paths.target - Path Units.
Jan 29 16:08:40.961937 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 16:08:40.963859 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 16:08:40.966050 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 16:08:40.969249 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 29 16:08:40.970166 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 29 16:08:40.970834 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 29 16:08:40.973450 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 16:08:40.975042 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 29 16:08:40.977238 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 16:08:40.978607 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 16:08:40.979273 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 16:08:40.979843 systemd[1]: Reached target basic.target - Basic System.
Jan 29 16:08:40.980367 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:08:40.980421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 16:08:40.983543 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 16:08:40.987214 lvm[1458]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 16:08:40.989628 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 16:08:40.993978 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 16:08:40.996038 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 16:08:40.999416 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 16:08:41.000257 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 16:08:41.002944 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 16:08:41.006115 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 16:08:41.021566 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 29 16:08:41.024219 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 16:08:41.031640 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 16:08:41.035700 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 16:08:41.038425 jq[1462]: false
Jan 29 16:08:41.040459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 16:08:41.040996 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 16:08:41.041827 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 16:08:41.048525 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 16:08:41.051423 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:08:41.056512 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:08:41.056720 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:08:41.059868 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:08:41.060083 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:08:41.086928 dbus-daemon[1461]: [system] SELinux support is enabled Jan 29 16:08:41.091794 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 16:08:41.095772 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:08:41.095809 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:08:41.098196 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:08:41.098230 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:08:41.105046 update_engine[1472]: I20250129 16:08:41.104746 1472 main.cc:92] Flatcar Update Engine starting Jan 29 16:08:41.111848 update_engine[1472]: I20250129 16:08:41.107831 1472 update_check_scheduler.cc:74] Next update check in 6m37s Jan 29 16:08:41.111748 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:08:41.111965 jq[1474]: true Jan 29 16:08:41.111958 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 29 16:08:41.123361 extend-filesystems[1463]: Found loop4 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found loop5 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found loop6 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found loop7 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found sda Jan 29 16:08:41.123361 extend-filesystems[1463]: Found sda1 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found sda2 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found sda3 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found usr Jan 29 16:08:41.123361 extend-filesystems[1463]: Found sda4 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found sda6 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found sda7 Jan 29 16:08:41.123361 extend-filesystems[1463]: Found sda9 Jan 29 16:08:41.123361 extend-filesystems[1463]: Checking size of /dev/sda9 Jan 29 16:08:41.154987 coreos-metadata[1460]: Jan 29 16:08:41.126 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 16:08:41.154987 coreos-metadata[1460]: Jan 29 16:08:41.127 INFO Fetch successful Jan 29 16:08:41.154987 coreos-metadata[1460]: Jan 29 16:08:41.127 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 16:08:41.154987 coreos-metadata[1460]: Jan 29 16:08:41.129 INFO Fetch successful Jan 29 16:08:41.155211 tar[1477]: linux-arm64/helm Jan 29 16:08:41.131771 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:08:41.133247 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:08:41.144295 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 29 16:08:41.179127 extend-filesystems[1463]: Resized partition /dev/sda9 Jan 29 16:08:41.187360 jq[1497]: true Jan 29 16:08:41.203926 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:08:41.212162 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 16:08:41.267575 systemd-logind[1471]: New seat seat0. Jan 29 16:08:41.283011 systemd-logind[1471]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 16:08:41.283042 systemd-logind[1471]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 29 16:08:41.283283 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:08:41.307903 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:08:41.310500 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:08:41.389454 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1402) Jan 29 16:08:41.389576 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:08:41.389869 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:08:41.401760 systemd[1]: Starting sshkeys.service... Jan 29 16:08:41.412910 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:08:41.424888 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 16:08:41.428405 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:08:41.450361 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 16:08:41.450361 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 16:08:41.450361 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Jan 29 16:08:41.460749 extend-filesystems[1463]: Resized filesystem in /dev/sda9 Jan 29 16:08:41.460749 extend-filesystems[1463]: Found sr0 Jan 29 16:08:41.452118 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:08:41.452350 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:08:41.512965 containerd[1493]: time="2025-01-29T16:08:41.511016480Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:08:41.515601 coreos-metadata[1538]: Jan 29 16:08:41.515 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 16:08:41.520172 coreos-metadata[1538]: Jan 29 16:08:41.519 INFO Fetch successful Jan 29 16:08:41.521986 unknown[1538]: wrote ssh authorized keys file for user: core Jan 29 16:08:41.569283 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:08:41.571409 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:08:41.577381 systemd[1]: Finished sshkeys.service. Jan 29 16:08:41.609346 containerd[1493]: time="2025-01-29T16:08:41.606927080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:41.613776 containerd[1493]: time="2025-01-29T16:08:41.613715120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:41.613776 containerd[1493]: time="2025-01-29T16:08:41.613766280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 16:08:41.613898 containerd[1493]: time="2025-01-29T16:08:41.613790040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 29 16:08:41.613988 containerd[1493]: time="2025-01-29T16:08:41.613966520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:08:41.614013 containerd[1493]: time="2025-01-29T16:08:41.613989600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:41.614073 containerd[1493]: time="2025-01-29T16:08:41.614056440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:41.614101 containerd[1493]: time="2025-01-29T16:08:41.614071760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:41.614289 containerd[1493]: time="2025-01-29T16:08:41.614269120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:41.614314 containerd[1493]: time="2025-01-29T16:08:41.614288240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:41.614314 containerd[1493]: time="2025-01-29T16:08:41.614302920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:41.614411 containerd[1493]: time="2025-01-29T16:08:41.614312520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:41.616317 containerd[1493]: time="2025-01-29T16:08:41.616212880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 16:08:41.617421 containerd[1493]: time="2025-01-29T16:08:41.617357200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:08:41.617598 containerd[1493]: time="2025-01-29T16:08:41.617574320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:08:41.617598 containerd[1493]: time="2025-01-29T16:08:41.617595600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:08:41.617716 containerd[1493]: time="2025-01-29T16:08:41.617697600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:08:41.617770 containerd[1493]: time="2025-01-29T16:08:41.617755080Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:08:41.622146 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:08:41.626882 containerd[1493]: time="2025-01-29T16:08:41.626832280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:08:41.626970 containerd[1493]: time="2025-01-29T16:08:41.626905200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 16:08:41.626970 containerd[1493]: time="2025-01-29T16:08:41.626921440Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:08:41.626970 containerd[1493]: time="2025-01-29T16:08:41.626937880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 16:08:41.626970 containerd[1493]: time="2025-01-29T16:08:41.626953120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:08:41.627161 containerd[1493]: time="2025-01-29T16:08:41.627139040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:08:41.628411 containerd[1493]: time="2025-01-29T16:08:41.628352720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628617720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628645280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628662680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628676840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628689880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628704680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628719680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628735320Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628748040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628760240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628772480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628796160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628810480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629310 containerd[1493]: time="2025-01-29T16:08:41.628824280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628838320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628850840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628864040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628877280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628891400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628913120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628932160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628943960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628957120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628969920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.628984400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.629004840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.629018760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.629713 containerd[1493]: time="2025-01-29T16:08:41.629029640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:08:41.633372 containerd[1493]: time="2025-01-29T16:08:41.632856800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 16:08:41.633372 containerd[1493]: time="2025-01-29T16:08:41.632917280Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:08:41.633372 containerd[1493]: time="2025-01-29T16:08:41.632930960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:08:41.633372 containerd[1493]: time="2025-01-29T16:08:41.632946880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:08:41.633372 containerd[1493]: time="2025-01-29T16:08:41.632958280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 16:08:41.633372 containerd[1493]: time="2025-01-29T16:08:41.632972720Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:08:41.633372 containerd[1493]: time="2025-01-29T16:08:41.632987920Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:08:41.633372 containerd[1493]: time="2025-01-29T16:08:41.633021960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 16:08:41.637383 containerd[1493]: time="2025-01-29T16:08:41.634207040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:08:41.637383 containerd[1493]: time="2025-01-29T16:08:41.634277600Z" level=info msg="Connect containerd service" Jan 29 16:08:41.637383 containerd[1493]: time="2025-01-29T16:08:41.634369520Z" level=info msg="using legacy CRI server" Jan 29 16:08:41.637383 containerd[1493]: time="2025-01-29T16:08:41.634396880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:08:41.637383 containerd[1493]: time="2025-01-29T16:08:41.634777880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:08:41.639782 containerd[1493]: time="2025-01-29T16:08:41.639739800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:08:41.640298 containerd[1493]: time="2025-01-29T16:08:41.640223600Z" level=info msg="Start subscribing containerd event" Jan 29 16:08:41.640298 containerd[1493]: time="2025-01-29T16:08:41.640296640Z" level=info msg="Start recovering state" Jan 29 16:08:41.641062 containerd[1493]: time="2025-01-29T16:08:41.641035960Z" level=info msg="Start event monitor" Jan 29 16:08:41.641097 containerd[1493]: time="2025-01-29T16:08:41.641064520Z" level=info msg="Start 
snapshots syncer" Jan 29 16:08:41.641097 containerd[1493]: time="2025-01-29T16:08:41.641075800Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:08:41.641097 containerd[1493]: time="2025-01-29T16:08:41.641084680Z" level=info msg="Start streaming server" Jan 29 16:08:41.641479 containerd[1493]: time="2025-01-29T16:08:41.641454360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:08:41.643799 containerd[1493]: time="2025-01-29T16:08:41.643461560Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:08:41.643799 containerd[1493]: time="2025-01-29T16:08:41.643552480Z" level=info msg="containerd successfully booted in 0.134910s" Jan 29 16:08:41.643681 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:08:41.834589 tar[1477]: linux-arm64/LICENSE Jan 29 16:08:41.834701 tar[1477]: linux-arm64/README.md Jan 29 16:08:41.846628 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:08:41.964569 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 29 16:08:41.965749 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. Jan 29 16:08:41.968815 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:08:41.970489 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:08:41.981142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:08:41.983762 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:08:42.033040 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:08:42.284582 systemd-networkd[1391]: eth1: Gained IPv6LL Jan 29 16:08:42.285224 systemd-timesyncd[1356]: Network configuration changed, trying to establish connection. 
Jan 29 16:08:42.359569 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:08:42.380548 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:08:42.387957 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:08:42.396800 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:08:42.397043 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:08:42.403921 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:08:42.416588 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:08:42.427771 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 16:08:42.435756 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 16:08:42.437684 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:08:42.700679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:08:42.701847 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:08:42.704410 systemd[1]: Startup finished in 798ms (kernel) + 9.467s (initrd) + 4.586s (userspace) = 14.852s. Jan 29 16:08:42.716786 (kubelet)[1591]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:08:43.248272 kubelet[1591]: E0129 16:08:43.248196 1591 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:08:43.249922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:08:43.250070 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 16:08:43.250611 systemd[1]: kubelet.service: Consumed 837ms CPU time, 234.5M memory peak. Jan 29 16:08:53.501466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:08:53.509703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:08:53.607512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:08:53.617188 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:08:53.668236 kubelet[1610]: E0129 16:08:53.668139 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:08:53.671229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:08:53.671494 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:08:53.671978 systemd[1]: kubelet.service: Consumed 144ms CPU time, 94.1M memory peak. Jan 29 16:09:03.864553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:09:03.870707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:09:03.988439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:09:03.994110 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:09:04.040673 kubelet[1625]: E0129 16:09:04.040581 1625 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:09:04.043014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:09:04.043276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:09:04.043716 systemd[1]: kubelet.service: Consumed 146ms CPU time, 93.8M memory peak. Jan 29 16:09:12.485716 systemd-timesyncd[1356]: Contacted time server 94.130.23.46:123 (2.flatcar.pool.ntp.org). Jan 29 16:09:12.485840 systemd-timesyncd[1356]: Initial clock synchronization to Wed 2025-01-29 16:09:12.183827 UTC. Jan 29 16:09:14.114710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:09:14.124056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:09:14.229433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:09:14.234496 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:09:14.280824 kubelet[1641]: E0129 16:09:14.280761 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:09:14.283466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:09:14.283649 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:09:14.284193 systemd[1]: kubelet.service: Consumed 142ms CPU time, 97.1M memory peak. Jan 29 16:09:24.364469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 16:09:24.381721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:09:24.505734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:09:24.505771 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:09:24.551051 kubelet[1656]: E0129 16:09:24.550998 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:09:24.554123 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:09:24.554295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:09:24.554897 systemd[1]: kubelet.service: Consumed 139ms CPU time, 94.2M memory peak. 
Jan 29 16:09:26.416062 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:09:26.427726 systemd[1]: Started sshd@0-91.107.217.81:22-195.3.147.83:29746.service - OpenSSH per-connection server daemon (195.3.147.83:29746). Jan 29 16:09:26.849700 sshd[1664]: Invalid user xd from 195.3.147.83 port 29746 Jan 29 16:09:26.863452 update_engine[1472]: I20250129 16:09:26.863389 1472 update_attempter.cc:509] Updating boot flags... Jan 29 16:09:26.914443 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1675) Jan 29 16:09:26.961681 sshd[1664]: Connection closed by invalid user xd 195.3.147.83 port 29746 [preauth] Jan 29 16:09:26.964886 systemd[1]: sshd@0-91.107.217.81:22-195.3.147.83:29746.service: Deactivated successfully. Jan 29 16:09:26.979353 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1678) Jan 29 16:09:28.835956 systemd[1]: Started sshd@1-91.107.217.81:22-139.178.68.195:55494.service - OpenSSH per-connection server daemon (139.178.68.195:55494). Jan 29 16:09:29.821312 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 55494 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:09:29.823888 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:09:29.836524 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 16:09:29.844178 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:09:29.849753 systemd-logind[1471]: New session 1 of user core. Jan 29 16:09:29.861154 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:09:29.869966 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 16:09:29.874878 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 16:09:29.878671 systemd-logind[1471]: New session c1 of user core.
Jan 29 16:09:30.007753 systemd[1691]: Queued start job for default target default.target.
Jan 29 16:09:30.017021 systemd[1691]: Created slice app.slice - User Application Slice.
Jan 29 16:09:30.017150 systemd[1691]: Reached target paths.target - Paths.
Jan 29 16:09:30.017227 systemd[1691]: Reached target timers.target - Timers.
Jan 29 16:09:30.019616 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 16:09:30.033846 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 16:09:30.034084 systemd[1691]: Reached target sockets.target - Sockets.
Jan 29 16:09:30.034179 systemd[1691]: Reached target basic.target - Basic System.
Jan 29 16:09:30.034244 systemd[1691]: Reached target default.target - Main User Target.
Jan 29 16:09:30.034290 systemd[1691]: Startup finished in 148ms.
Jan 29 16:09:30.034928 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 16:09:30.046702 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 16:09:30.752418 systemd[1]: Started sshd@2-91.107.217.81:22-139.178.68.195:55508.service - OpenSSH per-connection server daemon (139.178.68.195:55508).
Jan 29 16:09:31.736909 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 55508 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:09:31.739074 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:09:31.746509 systemd-logind[1471]: New session 2 of user core.
Jan 29 16:09:31.751710 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 16:09:32.419079 sshd[1704]: Connection closed by 139.178.68.195 port 55508
Jan 29 16:09:32.418829 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Jan 29 16:09:32.424454 systemd[1]: sshd@2-91.107.217.81:22-139.178.68.195:55508.service: Deactivated successfully.
Jan 29 16:09:32.426444 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 16:09:32.428732 systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit.
Jan 29 16:09:32.429935 systemd-logind[1471]: Removed session 2.
Jan 29 16:09:32.597851 systemd[1]: Started sshd@3-91.107.217.81:22-139.178.68.195:55514.service - OpenSSH per-connection server daemon (139.178.68.195:55514).
Jan 29 16:09:33.585572 sshd[1710]: Accepted publickey for core from 139.178.68.195 port 55514 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:09:33.587834 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:09:33.594294 systemd-logind[1471]: New session 3 of user core.
Jan 29 16:09:33.602667 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 16:09:34.266320 sshd[1712]: Connection closed by 139.178.68.195 port 55514
Jan 29 16:09:34.267629 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
Jan 29 16:09:34.274616 systemd[1]: sshd@3-91.107.217.81:22-139.178.68.195:55514.service: Deactivated successfully.
Jan 29 16:09:34.278330 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 16:09:34.279449 systemd-logind[1471]: Session 3 logged out. Waiting for processes to exit.
Jan 29 16:09:34.280785 systemd-logind[1471]: Removed session 3.
Jan 29 16:09:34.443999 systemd[1]: Started sshd@4-91.107.217.81:22-139.178.68.195:55524.service - OpenSSH per-connection server daemon (139.178.68.195:55524).
Jan 29 16:09:34.614024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
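The logind entries above follow a fixed lifecycle: `New session N`, then on logout `Session N logged out` and finally `Removed session N`. A small sketch, using abbreviated copies of the events above, that checks every opened session was also removed:

```python
import re

# Abbreviated systemd-logind events in the order they appear above.
events = [
    "systemd-logind[1471]: New session 2 of user core.",
    "systemd-logind[1471]: Session 2 logged out. Waiting for processes to exit.",
    "systemd-logind[1471]: Removed session 2.",
    "systemd-logind[1471]: New session 3 of user core.",
    "systemd-logind[1471]: Removed session 3.",
]

opened = {m.group(1) for e in events if (m := re.search(r"New session (\w+)", e))}
removed = {m.group(1) for e in events if (m := re.search(r"Removed session (\w+)", e))}
leaked = opened - removed
print(sorted(leaked))  # [] -> every session was closed
```

Sessions that appear in `opened` but never in `removed` would indicate connections that are still live (or scopes that failed to stop), which is a useful check when auditing logs like this one.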
Jan 29 16:09:34.631732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:09:34.734890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:09:34.739777 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:09:34.780132 kubelet[1728]: E0129 16:09:34.780013 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:09:34.783674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:09:34.783895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:09:34.786421 systemd[1]: kubelet.service: Consumed 137ms CPU time, 96.1M memory peak.
Jan 29 16:09:35.432959 sshd[1718]: Accepted publickey for core from 139.178.68.195 port 55524 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:09:35.435200 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:09:35.443421 systemd-logind[1471]: New session 4 of user core.
Jan 29 16:09:35.445536 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 16:09:36.112930 sshd[1735]: Connection closed by 139.178.68.195 port 55524
Jan 29 16:09:36.114196 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Jan 29 16:09:36.121300 systemd[1]: sshd@4-91.107.217.81:22-139.178.68.195:55524.service: Deactivated successfully.
Jan 29 16:09:36.124463 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 16:09:36.125577 systemd-logind[1471]: Session 4 logged out. Waiting for processes to exit.
Jan 29 16:09:36.126757 systemd-logind[1471]: Removed session 4.
Jan 29 16:09:36.293918 systemd[1]: Started sshd@5-91.107.217.81:22-139.178.68.195:49704.service - OpenSSH per-connection server daemon (139.178.68.195:49704).
Jan 29 16:09:37.277226 sshd[1741]: Accepted publickey for core from 139.178.68.195 port 49704 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:09:37.280278 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:09:37.288106 systemd-logind[1471]: New session 5 of user core.
Jan 29 16:09:37.295054 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 16:09:37.810705 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 16:09:37.811007 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:09:37.825870 sudo[1744]: pam_unix(sudo:session): session closed for user root
Jan 29 16:09:37.985429 sshd[1743]: Connection closed by 139.178.68.195 port 49704
Jan 29 16:09:37.986377 sshd-session[1741]: pam_unix(sshd:session): session closed for user core
Jan 29 16:09:37.989894 systemd[1]: sshd@5-91.107.217.81:22-139.178.68.195:49704.service: Deactivated successfully.
Jan 29 16:09:37.993080 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 16:09:37.994888 systemd-logind[1471]: Session 5 logged out. Waiting for processes to exit.
Jan 29 16:09:37.996037 systemd-logind[1471]: Removed session 5.
Jan 29 16:09:38.173882 systemd[1]: Started sshd@6-91.107.217.81:22-139.178.68.195:49720.service - OpenSSH per-connection server daemon (139.178.68.195:49720).
Jan 29 16:09:39.176432 sshd[1750]: Accepted publickey for core from 139.178.68.195 port 49720 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:09:39.178636 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:09:39.184772 systemd-logind[1471]: New session 6 of user core.
Jan 29 16:09:39.187563 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 16:09:39.704477 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 16:09:39.704798 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:09:39.708811 sudo[1754]: pam_unix(sudo:session): session closed for user root
Jan 29 16:09:39.713974 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 16:09:39.714254 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:09:39.727913 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 16:09:39.756435 augenrules[1776]: No rules
Jan 29 16:09:39.757811 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 16:09:39.758049 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 16:09:39.759516 sudo[1753]: pam_unix(sudo:session): session closed for user root
Jan 29 16:09:39.921299 sshd[1752]: Connection closed by 139.178.68.195 port 49720
Jan 29 16:09:39.922129 sshd-session[1750]: pam_unix(sshd:session): session closed for user core
Jan 29 16:09:39.926816 systemd[1]: sshd@6-91.107.217.81:22-139.178.68.195:49720.service: Deactivated successfully.
Jan 29 16:09:39.929173 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 16:09:39.930279 systemd-logind[1471]: Session 6 logged out. Waiting for processes to exit.
Jan 29 16:09:39.931863 systemd-logind[1471]: Removed session 6.
Jan 29 16:09:40.103787 systemd[1]: Started sshd@7-91.107.217.81:22-139.178.68.195:49730.service - OpenSSH per-connection server daemon (139.178.68.195:49730).
Jan 29 16:09:41.094196 sshd[1785]: Accepted publickey for core from 139.178.68.195 port 49730 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:09:41.096183 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:09:41.103004 systemd-logind[1471]: New session 7 of user core.
Jan 29 16:09:41.124724 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 16:09:41.622392 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 16:09:41.622673 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 16:09:41.949771 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 16:09:41.950915 (dockerd)[1805]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 16:09:42.190154 dockerd[1805]: time="2025-01-29T16:09:42.189765311Z" level=info msg="Starting up"
Jan 29 16:09:42.275896 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport662232326-merged.mount: Deactivated successfully.
Jan 29 16:09:42.297084 dockerd[1805]: time="2025-01-29T16:09:42.296614728Z" level=info msg="Loading containers: start."
Jan 29 16:09:42.465420 kernel: Initializing XFRM netlink socket
Jan 29 16:09:42.553564 systemd-networkd[1391]: docker0: Link UP
Jan 29 16:09:42.586512 dockerd[1805]: time="2025-01-29T16:09:42.586456889Z" level=info msg="Loading containers: done."
Jan 29 16:09:42.604252 dockerd[1805]: time="2025-01-29T16:09:42.603505022Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 16:09:42.604252 dockerd[1805]: time="2025-01-29T16:09:42.603656823Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 29 16:09:42.604252 dockerd[1805]: time="2025-01-29T16:09:42.603912502Z" level=info msg="Daemon has completed initialization"
Jan 29 16:09:42.636266 dockerd[1805]: time="2025-01-29T16:09:42.636205065Z" level=info msg="API listen on /run/docker.sock"
Jan 29 16:09:42.636659 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 16:09:43.693384 containerd[1493]: time="2025-01-29T16:09:43.693289257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\""
Jan 29 16:09:44.336689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725454260.mount: Deactivated successfully.
Jan 29 16:09:44.863899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 29 16:09:44.872814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:09:44.980115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:09:44.985256 (kubelet)[2050]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:09:45.025383 kubelet[2050]: E0129 16:09:45.024900 2050 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:09:45.027036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:09:45.027221 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:09:45.027672 systemd[1]: kubelet.service: Consumed 138ms CPU time, 94.4M memory peak.
Jan 29 16:09:46.087427 containerd[1493]: time="2025-01-29T16:09:46.086646642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:46.089042 containerd[1493]: time="2025-01-29T16:09:46.088536250Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618162"
Jan 29 16:09:46.090608 containerd[1493]: time="2025-01-29T16:09:46.090531877Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:46.095021 containerd[1493]: time="2025-01-29T16:09:46.094926079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:46.098470 containerd[1493]: time="2025-01-29T16:09:46.098029618Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.404687824s"
Jan 29 16:09:46.098470 containerd[1493]: time="2025-01-29T16:09:46.098116713Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\""
Jan 29 16:09:46.099218 containerd[1493]: time="2025-01-29T16:09:46.099171456Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\""
Jan 29 16:09:48.333722 containerd[1493]: time="2025-01-29T16:09:48.332745306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:48.334095 containerd[1493]: time="2025-01-29T16:09:48.332944176Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469487"
Jan 29 16:09:48.334846 containerd[1493]: time="2025-01-29T16:09:48.334811866Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:48.338930 containerd[1493]: time="2025-01-29T16:09:48.338858253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:48.341007 containerd[1493]: time="2025-01-29T16:09:48.340867525Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.241534241s"
Jan 29 16:09:48.341192 containerd[1493]: time="2025-01-29T16:09:48.341172612Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\""
Jan 29 16:09:48.341996 containerd[1493]: time="2025-01-29T16:09:48.341930569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\""
Jan 29 16:09:49.928418 containerd[1493]: time="2025-01-29T16:09:49.928310158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:49.930495 containerd[1493]: time="2025-01-29T16:09:49.929837742Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024237"
Jan 29 16:09:49.931910 containerd[1493]: time="2025-01-29T16:09:49.931830114Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:49.935554 containerd[1493]: time="2025-01-29T16:09:49.935487611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:49.938044 containerd[1493]: time="2025-01-29T16:09:49.937981136Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.595838294s"
Jan 29 16:09:49.938457 containerd[1493]: time="2025-01-29T16:09:49.938272659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\""
Jan 29 16:09:49.939740 containerd[1493]: time="2025-01-29T16:09:49.939466274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\""
Jan 29 16:09:51.320620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204466448.mount: Deactivated successfully.
Jan 29 16:09:51.685957 containerd[1493]: time="2025-01-29T16:09:51.685874306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:51.687958 containerd[1493]: time="2025-01-29T16:09:51.687533963Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772143"
Jan 29 16:09:51.689577 containerd[1493]: time="2025-01-29T16:09:51.689536026Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:51.693072 containerd[1493]: time="2025-01-29T16:09:51.693021604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:51.694013 containerd[1493]: time="2025-01-29T16:09:51.693970329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.754459968s"
Jan 29 16:09:51.694013 containerd[1493]: time="2025-01-29T16:09:51.694011294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\""
Jan 29 16:09:51.694652 containerd[1493]: time="2025-01-29T16:09:51.694622734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 16:09:52.298185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1254276486.mount: Deactivated successfully.
Jan 29 16:09:53.330406 containerd[1493]: time="2025-01-29T16:09:53.329739014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:53.332392 containerd[1493]: time="2025-01-29T16:09:53.332264912Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Jan 29 16:09:53.334130 containerd[1493]: time="2025-01-29T16:09:53.334047482Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:53.337874 containerd[1493]: time="2025-01-29T16:09:53.337784603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:53.339326 containerd[1493]: time="2025-01-29T16:09:53.339179127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.644521428s"
Jan 29 16:09:53.339326 containerd[1493]: time="2025-01-29T16:09:53.339222612Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 16:09:53.339946 containerd[1493]: time="2025-01-29T16:09:53.339901212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 29 16:09:53.858657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974541816.mount: Deactivated successfully.
Jan 29 16:09:53.865682 containerd[1493]: time="2025-01-29T16:09:53.865606392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:53.868075 containerd[1493]: time="2025-01-29T16:09:53.867989553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jan 29 16:09:53.870844 containerd[1493]: time="2025-01-29T16:09:53.870718355Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:53.875700 containerd[1493]: time="2025-01-29T16:09:53.875618932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:53.878658 containerd[1493]: time="2025-01-29T16:09:53.878269045Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 538.227416ms"
Jan 29 16:09:53.878658 containerd[1493]: time="2025-01-29T16:09:53.878332972Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 29 16:09:53.879835 containerd[1493]: time="2025-01-29T16:09:53.879761901Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 29 16:09:54.428890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354549569.mount: Deactivated successfully.
Jan 29 16:09:55.115603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 29 16:09:55.125877 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:09:55.231557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:09:55.235930 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 16:09:55.284281 kubelet[2180]: E0129 16:09:55.284210 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 16:09:55.287054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 16:09:55.287224 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 16:09:55.288049 systemd[1]: kubelet.service: Consumed 140ms CPU time, 96.7M memory peak.
Jan 29 16:09:56.788457 containerd[1493]: time="2025-01-29T16:09:56.788326617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:56.790060 containerd[1493]: time="2025-01-29T16:09:56.789996825Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487"
Jan 29 16:09:56.791617 containerd[1493]: time="2025-01-29T16:09:56.791544421Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:56.796019 containerd[1493]: time="2025-01-29T16:09:56.795936824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 16:09:56.797742 containerd[1493]: time="2025-01-29T16:09:56.797569428Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.91774292s"
Jan 29 16:09:56.797742 containerd[1493]: time="2025-01-29T16:09:56.797613353Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 29 16:10:02.915838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:10:02.916023 systemd[1]: kubelet.service: Consumed 140ms CPU time, 96.7M memory peak.
Jan 29 16:10:02.925936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:10:02.963984 systemd[1]: Reload requested from client PID 2220 ('systemctl') (unit session-7.scope)...
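Each completed pull above is reported by containerd as `Pulled image … in <duration>`, where the duration is a Go-style string (`538.227416ms`, `2.91774292s`). A small sketch, under the assumption that only the `ms` and `s` suffixes seen in this log occur, for converting those strings to seconds so pull times can be compared:

```python
def go_duration_to_seconds(d: str) -> float:
    """Convert Go-style duration strings as logged by containerd
    (e.g. '538.227416ms', '2.91774292s') to seconds. Only the ms/s
    suffixes appearing in the log above are handled."""
    if d.endswith("ms"):  # check 'ms' before the bare 's' suffix
        return float(d[:-2]) / 1000.0
    if d.endswith("s"):
        return float(d[:-1])
    raise ValueError(f"unsupported duration: {d}")

# Durations copied from two of the "Pulled image" messages above.
pulls = {
    "registry.k8s.io/pause:3.10": "538.227416ms",
    "registry.k8s.io/etcd:3.5.15-0": "2.91774292s",
}
seconds = {img: go_duration_to_seconds(d) for img, d in pulls.items()}
print(max(seconds, key=seconds.get))  # registry.k8s.io/etcd:3.5.15-0
```

The large etcd image unsurprisingly dominates; normalizing the units first avoids mistakenly ranking `538.227416ms` above `2.91774292s` in a naive string comparison.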
Jan 29 16:10:02.964005 systemd[1]: Reloading...
Jan 29 16:10:03.076403 zram_generator::config[2266]: No configuration found.
Jan 29 16:10:03.193842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:10:03.287013 systemd[1]: Reloading finished in 322 ms.
Jan 29 16:10:03.342842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:10:03.349319 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:10:03.350237 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:10:03.350518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:10:03.350571 systemd[1]: kubelet.service: Consumed 95ms CPU time, 82.2M memory peak.
Jan 29 16:10:03.355952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:10:03.466958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:10:03.473915 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:10:03.513390 kubelet[2315]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:10:03.513390 kubelet[2315]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:10:03.513390 kubelet[2315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
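During the reload above, systemd warns that `docker.socket` still references `/var/run/docker.sock`, a path below the legacy `/var/run/` symlink, and rewrites it to `/run/docker.sock` on the fly. A hypothetical drop-in that would silence the warning permanently (the drop-in path and filename are assumptions, not taken from this log; an empty `ListenStream=` resets the inherited list before adding the new path):

```ini
# Assumed location: /etc/systemd/system/docker.socket.d/10-listen-run.conf
[Socket]
ListenStream=
ListenStream=/run/docker.sock
```

After placing such a drop-in, `systemctl daemon-reload` would pick it up without editing the vendor unit under /usr/lib/systemd/system/.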
Jan 29 16:10:03.513390 kubelet[2315]: I0129 16:10:03.512849 2315 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:10:04.497280 kubelet[2315]: I0129 16:10:04.497229 2315 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 16:10:04.498528 kubelet[2315]: I0129 16:10:04.497525 2315 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:10:04.498528 kubelet[2315]: I0129 16:10:04.497888 2315 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 16:10:04.527428 kubelet[2315]: E0129 16:10:04.525109 2315 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.107.217.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:10:04.527428 kubelet[2315]: I0129 16:10:04.526412 2315 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:10:04.538119 kubelet[2315]: E0129 16:10:04.538055 2315 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:10:04.538119 kubelet[2315]: I0129 16:10:04.538106 2315 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:10:04.544291 kubelet[2315]: I0129 16:10:04.544253 2315 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:10:04.545470 kubelet[2315]: I0129 16:10:04.545436 2315 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 16:10:04.545657 kubelet[2315]: I0129 16:10:04.545615 2315 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:10:04.545863 kubelet[2315]: I0129 16:10:04.545647 2315 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-0-1a94fc8352","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:10:04.545992 kubelet[2315]: I0129 16:10:04.545980 2315 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:10:04.545992 kubelet[2315]: I0129 16:10:04.545992 2315 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 16:10:04.546203 kubelet[2315]: I0129 16:10:04.546177 2315 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:10:04.548988 kubelet[2315]: I0129 16:10:04.548485 2315 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 16:10:04.548988 kubelet[2315]: I0129 16:10:04.548519 2315 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:10:04.548988 kubelet[2315]: I0129 16:10:04.548611 2315 kubelet.go:314] "Adding apiserver pod source"
Jan 29 16:10:04.548988 kubelet[2315]: I0129 16:10:04.548622 2315 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:10:04.553639 kubelet[2315]: W0129 16:10:04.553582 2315 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.217.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-0-1a94fc8352&limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused
Jan 29 16:10:04.553766 kubelet[2315]: E0129 16:10:04.553647 2315 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.107.217.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-0-1a94fc8352&limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:10:04.554746 kubelet[2315]: W0129 16:10:04.554005 2315 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.217.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused
Jan 29 16:10:04.554746 kubelet[2315]: E0129 16:10:04.554065 2315 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.107.217.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:10:04.554746 kubelet[2315]: I0129 16:10:04.554499 2315 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:10:04.558353 kubelet[2315]: I0129 16:10:04.558293 2315 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:10:04.559967 kubelet[2315]: W0129 16:10:04.559923 2315 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 16:10:04.561048 kubelet[2315]: I0129 16:10:04.561023 2315 server.go:1269] "Started kubelet"
Jan 29 16:10:04.562010 kubelet[2315]: I0129 16:10:04.561661 2315 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:10:04.563011 kubelet[2315]: I0129 16:10:04.562985 2315 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 16:10:04.564603 kubelet[2315]: I0129 16:10:04.563916 2315 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:10:04.564603 kubelet[2315]: I0129 16:10:04.564218 2315 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:10:04.565973 kubelet[2315]: I0129 16:10:04.565887 2315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:10:04.566266 kubelet[2315]: E0129 16:10:04.564379 2315 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://91.107.217.81:6443/api/v1/namespaces/default/events\": dial tcp 91.107.217.81:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-0-1a94fc8352.181f35ae7329b90b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-0-1a94fc8352,UID:ci-4230-0-0-0-1a94fc8352,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-0-1a94fc8352,},FirstTimestamp:2025-01-29 16:10:04.560996619 +0000 UTC m=+1.083459370,LastTimestamp:2025-01-29 16:10:04.560996619 +0000 UTC m=+1.083459370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-0-1a94fc8352,}" Jan 29 16:10:04.568400 kubelet[2315]: I0129 16:10:04.567294 2315 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 16:10:04.573978 kubelet[2315]: E0129 16:10:04.573605 2315 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:10:04.573978 kubelet[2315]: E0129 16:10:04.573933 2315 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-0-1a94fc8352\" not found" Jan 29 16:10:04.574126 kubelet[2315]: I0129 16:10:04.574057 2315 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 16:10:04.574454 kubelet[2315]: I0129 16:10:04.574332 2315 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 16:10:04.574454 kubelet[2315]: I0129 16:10:04.574440 2315 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:10:04.575320 kubelet[2315]: I0129 16:10:04.575278 2315 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:10:04.577110 kubelet[2315]: E0129 16:10:04.576718 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.217.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-0-1a94fc8352?timeout=10s\": dial tcp 91.107.217.81:6443: connect: connection refused" interval="200ms" Jan 29 16:10:04.577110 kubelet[2315]: W0129 16:10:04.576800 2315 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.217.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 16:10:04.577110 kubelet[2315]: E0129 16:10:04.576842 2315 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.107.217.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:10:04.579374 kubelet[2315]: 
I0129 16:10:04.578639 2315 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:10:04.579374 kubelet[2315]: I0129 16:10:04.578660 2315 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:10:04.594166 kubelet[2315]: I0129 16:10:04.594092 2315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:10:04.595341 kubelet[2315]: I0129 16:10:04.595291 2315 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:10:04.595341 kubelet[2315]: I0129 16:10:04.595327 2315 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:10:04.595464 kubelet[2315]: I0129 16:10:04.595368 2315 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 16:10:04.595464 kubelet[2315]: E0129 16:10:04.595437 2315 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:10:04.606213 kubelet[2315]: W0129 16:10:04.606173 2315 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.217.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused Jan 29 16:10:04.606777 kubelet[2315]: E0129 16:10:04.606221 2315 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.107.217.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError" Jan 29 16:10:04.608949 kubelet[2315]: I0129 16:10:04.608924 2315 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:10:04.608949 kubelet[2315]: I0129 16:10:04.608944 2315 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:10:04.609080 kubelet[2315]: 
I0129 16:10:04.608964 2315 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:10:04.611071 kubelet[2315]: I0129 16:10:04.611033 2315 policy_none.go:49] "None policy: Start" Jan 29 16:10:04.612006 kubelet[2315]: I0129 16:10:04.611962 2315 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:10:04.612006 kubelet[2315]: I0129 16:10:04.611995 2315 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:10:04.618999 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:10:04.631042 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:10:04.635747 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:10:04.652147 kubelet[2315]: I0129 16:10:04.652094 2315 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:10:04.652674 kubelet[2315]: I0129 16:10:04.652461 2315 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 16:10:04.652674 kubelet[2315]: I0129 16:10:04.652491 2315 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:10:04.654661 kubelet[2315]: I0129 16:10:04.653259 2315 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:10:04.657714 kubelet[2315]: E0129 16:10:04.657638 2315 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-0-1a94fc8352\" not found" Jan 29 16:10:04.711776 systemd[1]: Created slice kubepods-burstable-pod87bc5600ae7e68cbf76ecc535ad2a727.slice - libcontainer container kubepods-burstable-pod87bc5600ae7e68cbf76ecc535ad2a727.slice. 
Jan 29 16:10:04.729211 systemd[1]: Created slice kubepods-burstable-podb5c91365a935511c16ef4a2283064428.slice - libcontainer container kubepods-burstable-podb5c91365a935511c16ef4a2283064428.slice.
Jan 29 16:10:04.752255 systemd[1]: Created slice kubepods-burstable-podc4a6d171ae69db727e94052249b8fc1e.slice - libcontainer container kubepods-burstable-podc4a6d171ae69db727e94052249b8fc1e.slice.
Jan 29 16:10:04.756911 kubelet[2315]: I0129 16:10:04.756850 2315 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.757367 kubelet[2315]: E0129 16:10:04.757288 2315 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.107.217.81:6443/api/v1/nodes\": dial tcp 91.107.217.81:6443: connect: connection refused" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.778257 kubelet[2315]: E0129 16:10:04.778193 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.217.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-0-1a94fc8352?timeout=10s\": dial tcp 91.107.217.81:6443: connect: connection refused" interval="400ms"
Jan 29 16:10:04.875610 kubelet[2315]: I0129 16:10:04.875431 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87bc5600ae7e68cbf76ecc535ad2a727-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-0-1a94fc8352\" (UID: \"87bc5600ae7e68cbf76ecc535ad2a727\") " pod="kube-system/kube-apiserver-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.875610 kubelet[2315]: I0129 16:10:04.875505 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.875610 kubelet[2315]: I0129 16:10:04.875535 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.875610 kubelet[2315]: I0129 16:10:04.875595 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.875863 kubelet[2315]: I0129 16:10:04.875644 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.875863 kubelet[2315]: I0129 16:10:04.875680 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.875863 kubelet[2315]: I0129 16:10:04.875715 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4a6d171ae69db727e94052249b8fc1e-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-0-1a94fc8352\" (UID: \"c4a6d171ae69db727e94052249b8fc1e\") " pod="kube-system/kube-scheduler-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.875863 kubelet[2315]: I0129 16:10:04.875746 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87bc5600ae7e68cbf76ecc535ad2a727-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-0-1a94fc8352\" (UID: \"87bc5600ae7e68cbf76ecc535ad2a727\") " pod="kube-system/kube-apiserver-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.875863 kubelet[2315]: I0129 16:10:04.875775 2315 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87bc5600ae7e68cbf76ecc535ad2a727-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-0-1a94fc8352\" (UID: \"87bc5600ae7e68cbf76ecc535ad2a727\") " pod="kube-system/kube-apiserver-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.960633 kubelet[2315]: I0129 16:10:04.960489 2315 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:04.961137 kubelet[2315]: E0129 16:10:04.960958 2315 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.107.217.81:6443/api/v1/nodes\": dial tcp 91.107.217.81:6443: connect: connection refused" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:05.030019 containerd[1493]: time="2025-01-29T16:10:05.029767375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-0-1a94fc8352,Uid:87bc5600ae7e68cbf76ecc535ad2a727,Namespace:kube-system,Attempt:0,}"
Jan 29 16:10:05.050626 containerd[1493]: time="2025-01-29T16:10:05.050502290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-0-1a94fc8352,Uid:b5c91365a935511c16ef4a2283064428,Namespace:kube-system,Attempt:0,}"
Jan 29 16:10:05.059908 containerd[1493]: time="2025-01-29T16:10:05.059545361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-0-1a94fc8352,Uid:c4a6d171ae69db727e94052249b8fc1e,Namespace:kube-system,Attempt:0,}"
Jan 29 16:10:05.178964 kubelet[2315]: E0129 16:10:05.178889 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.217.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-0-1a94fc8352?timeout=10s\": dial tcp 91.107.217.81:6443: connect: connection refused" interval="800ms"
Jan 29 16:10:05.364373 kubelet[2315]: I0129 16:10:05.364091 2315 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:05.365642 kubelet[2315]: E0129 16:10:05.365456 2315 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://91.107.217.81:6443/api/v1/nodes\": dial tcp 91.107.217.81:6443: connect: connection refused" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:05.454832 kubelet[2315]: W0129 16:10:05.454721 2315 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.107.217.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused
Jan 29 16:10:05.454993 kubelet[2315]: E0129 16:10:05.454845 2315 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.107.217.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:10:05.570682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956199935.mount: Deactivated successfully.
Jan 29 16:10:05.577943 containerd[1493]: time="2025-01-29T16:10:05.577872637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:10:05.579278 containerd[1493]: time="2025-01-29T16:10:05.579205524Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jan 29 16:10:05.581154 containerd[1493]: time="2025-01-29T16:10:05.581102928Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:10:05.582919 containerd[1493]: time="2025-01-29T16:10:05.582883445Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:10:05.585067 containerd[1493]: time="2025-01-29T16:10:05.585003983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:10:05.585941 kubelet[2315]: W0129 16:10:05.585764 2315 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.107.217.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-0-1a94fc8352&limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused
Jan 29 16:10:05.585941 kubelet[2315]: E0129 16:10:05.585856 2315 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.107.217.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-0-1a94fc8352&limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:10:05.589009 containerd[1493]: time="2025-01-29T16:10:05.588952521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:10:05.590404 containerd[1493]: time="2025-01-29T16:10:05.590050033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.180291ms"
Jan 29 16:10:05.590668 containerd[1493]: time="2025-01-29T16:10:05.590513663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 16:10:05.592608 containerd[1493]: time="2025-01-29T16:10:05.592546956Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 16:10:05.595156 containerd[1493]: time="2025-01-29T16:10:05.594994036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.353228ms"
Jan 29 16:10:05.619347 containerd[1493]: time="2025-01-29T16:10:05.618820953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.150252ms"
Jan 29 16:10:05.703438 containerd[1493]: time="2025-01-29T16:10:05.702445179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:10:05.703438 containerd[1493]: time="2025-01-29T16:10:05.702632111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:10:05.703438 containerd[1493]: time="2025-01-29T16:10:05.702645792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:05.703438 containerd[1493]: time="2025-01-29T16:10:05.702738678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:05.707264 containerd[1493]: time="2025-01-29T16:10:05.706949713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:10:05.707264 containerd[1493]: time="2025-01-29T16:10:05.707021838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:10:05.707264 containerd[1493]: time="2025-01-29T16:10:05.707048960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:05.707264 containerd[1493]: time="2025-01-29T16:10:05.707130085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:05.713404 containerd[1493]: time="2025-01-29T16:10:05.713101435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:10:05.713404 containerd[1493]: time="2025-01-29T16:10:05.713178800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:10:05.713404 containerd[1493]: time="2025-01-29T16:10:05.713201442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:05.714458 containerd[1493]: time="2025-01-29T16:10:05.714318595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:05.728664 systemd[1]: Started cri-containerd-70a02d11f79aae27aebf1510956b99844aa3f721f2abcb638740c8eaff30d365.scope - libcontainer container 70a02d11f79aae27aebf1510956b99844aa3f721f2abcb638740c8eaff30d365.
Jan 29 16:10:05.745615 systemd[1]: Started cri-containerd-4d2f78fa65fcb1788b3320b10020e005418e9286f7a8d4d57a119692a05661c2.scope - libcontainer container 4d2f78fa65fcb1788b3320b10020e005418e9286f7a8d4d57a119692a05661c2.
Jan 29 16:10:05.751794 systemd[1]: Started cri-containerd-53162de42d40e28bec820b3c293ff42a6daf3e87184cfb6bf3571e9a4ee70489.scope - libcontainer container 53162de42d40e28bec820b3c293ff42a6daf3e87184cfb6bf3571e9a4ee70489.
Jan 29 16:10:05.798784 containerd[1493]: time="2025-01-29T16:10:05.798705070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-0-1a94fc8352,Uid:87bc5600ae7e68cbf76ecc535ad2a727,Namespace:kube-system,Attempt:0,} returns sandbox id \"70a02d11f79aae27aebf1510956b99844aa3f721f2abcb638740c8eaff30d365\""
Jan 29 16:10:05.804819 containerd[1493]: time="2025-01-29T16:10:05.804765546Z" level=info msg="CreateContainer within sandbox \"70a02d11f79aae27aebf1510956b99844aa3f721f2abcb638740c8eaff30d365\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 16:10:05.813446 containerd[1493]: time="2025-01-29T16:10:05.813280582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-0-1a94fc8352,Uid:b5c91365a935511c16ef4a2283064428,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d2f78fa65fcb1788b3320b10020e005418e9286f7a8d4d57a119692a05661c2\""
Jan 29 16:10:05.817131 containerd[1493]: time="2025-01-29T16:10:05.816892338Z" level=info msg="CreateContainer within sandbox \"4d2f78fa65fcb1788b3320b10020e005418e9286f7a8d4d57a119692a05661c2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 16:10:05.817753 containerd[1493]: time="2025-01-29T16:10:05.817661549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-0-1a94fc8352,Uid:c4a6d171ae69db727e94052249b8fc1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"53162de42d40e28bec820b3c293ff42a6daf3e87184cfb6bf3571e9a4ee70489\""
Jan 29 16:10:05.821164 containerd[1493]: time="2025-01-29T16:10:05.821104214Z" level=info msg="CreateContainer within sandbox \"53162de42d40e28bec820b3c293ff42a6daf3e87184cfb6bf3571e9a4ee70489\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 16:10:05.830203 containerd[1493]: time="2025-01-29T16:10:05.830146125Z" level=info msg="CreateContainer within sandbox \"70a02d11f79aae27aebf1510956b99844aa3f721f2abcb638740c8eaff30d365\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8bf08f717e3e1f0b1a3635223446392280d96956dede6f7551365504858a0336\""
Jan 29 16:10:05.831077 containerd[1493]: time="2025-01-29T16:10:05.830968698Z" level=info msg="StartContainer for \"8bf08f717e3e1f0b1a3635223446392280d96956dede6f7551365504858a0336\""
Jan 29 16:10:05.841406 containerd[1493]: time="2025-01-29T16:10:05.841360218Z" level=info msg="CreateContainer within sandbox \"53162de42d40e28bec820b3c293ff42a6daf3e87184cfb6bf3571e9a4ee70489\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f\""
Jan 29 16:10:05.842374 containerd[1493]: time="2025-01-29T16:10:05.841927495Z" level=info msg="StartContainer for \"f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f\""
Jan 29 16:10:05.845126 containerd[1493]: time="2025-01-29T16:10:05.845093702Z" level=info msg="CreateContainer within sandbox \"4d2f78fa65fcb1788b3320b10020e005418e9286f7a8d4d57a119692a05661c2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99\""
Jan 29 16:10:05.846200 containerd[1493]: time="2025-01-29T16:10:05.846114728Z" level=info msg="StartContainer for \"23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99\""
Jan 29 16:10:05.875770 systemd[1]: Started cri-containerd-8bf08f717e3e1f0b1a3635223446392280d96956dede6f7551365504858a0336.scope - libcontainer container 8bf08f717e3e1f0b1a3635223446392280d96956dede6f7551365504858a0336.
Jan 29 16:10:05.884806 systemd[1]: Started cri-containerd-f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f.scope - libcontainer container f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f.
Jan 29 16:10:05.894885 systemd[1]: Started cri-containerd-23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99.scope - libcontainer container 23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99.
Jan 29 16:10:05.944372 containerd[1493]: time="2025-01-29T16:10:05.943320961Z" level=info msg="StartContainer for \"8bf08f717e3e1f0b1a3635223446392280d96956dede6f7551365504858a0336\" returns successfully"
Jan 29 16:10:05.970320 containerd[1493]: time="2025-01-29T16:10:05.969447069Z" level=info msg="StartContainer for \"23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99\" returns successfully"
Jan 29 16:10:05.970485 kubelet[2315]: W0129 16:10:05.969753 2315 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.107.217.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused
Jan 29 16:10:05.970485 kubelet[2315]: E0129 16:10:05.969832 2315 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.107.217.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:10:05.980821 kubelet[2315]: E0129 16:10:05.980763 2315 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.107.217.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-0-1a94fc8352?timeout=10s\": dial tcp 91.107.217.81:6443: connect: connection refused" interval="1.6s"
Jan 29 16:10:05.982720 containerd[1493]: time="2025-01-29T16:10:05.982675053Z" level=info msg="StartContainer for \"f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f\" returns successfully"
Jan 29 16:10:05.991265 kubelet[2315]: W0129 16:10:05.991194 2315 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.107.217.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.107.217.81:6443: connect: connection refused
Jan 29 16:10:05.991427 kubelet[2315]: E0129 16:10:05.991271 2315 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.107.217.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.107.217.81:6443: connect: connection refused" logger="UnhandledError"
Jan 29 16:10:06.168287 kubelet[2315]: I0129 16:10:06.168153 2315 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:08.144795 kubelet[2315]: E0129 16:10:08.144750 2315 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-0-0-1a94fc8352\" not found" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:08.189758 kubelet[2315]: I0129 16:10:08.189701 2315 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:08.557329 kubelet[2315]: I0129 16:10:08.556867 2315 apiserver.go:52] "Watching apiserver"
Jan 29 16:10:08.575253 kubelet[2315]: I0129 16:10:08.575161 2315 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 16:10:10.428799 systemd[1]: Reload requested from client PID 2598 ('systemctl') (unit session-7.scope)...
Jan 29 16:10:10.428817 systemd[1]: Reloading...
Jan 29 16:10:10.547392 zram_generator::config[2652]: No configuration found.
Jan 29 16:10:10.634058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 16:10:10.740031 systemd[1]: Reloading finished in 310 ms.
Jan 29 16:10:10.769320 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:10:10.783226 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 16:10:10.783784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:10:10.783883 systemd[1]: kubelet.service: Consumed 1.531s CPU time, 115.1M memory peak.
Jan 29 16:10:10.790574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 16:10:10.897398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 16:10:10.908813 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 16:10:10.956368 kubelet[2688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:10:10.956368 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 16:10:10.956368 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 16:10:10.956368 kubelet[2688]: I0129 16:10:10.955148 2688 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 16:10:10.969239 kubelet[2688]: I0129 16:10:10.969174 2688 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 16:10:10.969239 kubelet[2688]: I0129 16:10:10.969205 2688 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 16:10:10.969498 kubelet[2688]: I0129 16:10:10.969461 2688 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 16:10:10.971160 kubelet[2688]: I0129 16:10:10.971128 2688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 16:10:10.973893 kubelet[2688]: I0129 16:10:10.973730 2688 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 16:10:10.976928 kubelet[2688]: E0129 16:10:10.976905 2688 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 16:10:10.977601 kubelet[2688]: I0129 16:10:10.977012 2688 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 16:10:10.979156 kubelet[2688]: I0129 16:10:10.979129 2688 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 16:10:10.979249 kubelet[2688]: I0129 16:10:10.979235 2688 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 16:10:10.979375 kubelet[2688]: I0129 16:10:10.979326 2688 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 16:10:10.979544 kubelet[2688]: I0129 16:10:10.979366 2688 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-0-1a94fc8352","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 16:10:10.979651 kubelet[2688]: I0129 16:10:10.979553 2688 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 16:10:10.979651 kubelet[2688]: I0129 16:10:10.979562 2688 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 16:10:10.979651 kubelet[2688]: I0129 16:10:10.979589 2688 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:10:10.981506 kubelet[2688]: I0129 16:10:10.979682 2688 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 16:10:10.981506 kubelet[2688]: I0129 16:10:10.979696 2688 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 16:10:10.981506 kubelet[2688]: I0129 16:10:10.979714 2688 kubelet.go:314] "Adding apiserver pod source"
Jan 29 16:10:10.981506 kubelet[2688]: I0129 16:10:10.979723 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 16:10:10.986635 kubelet[2688]: I0129 16:10:10.986612 2688 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 16:10:10.987439 kubelet[2688]: I0129 16:10:10.987422 2688 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 16:10:10.988838 kubelet[2688]: I0129 16:10:10.988818 2688 server.go:1269] "Started kubelet"
Jan 29 16:10:10.992311 kubelet[2688]: I0129 16:10:10.992239 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 16:10:11.008387 kubelet[2688]: I0129 16:10:11.007418 2688 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 16:10:11.011358 kubelet[2688]: I0129 16:10:11.009472 2688 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 16:10:11.012637 kubelet[2688]: I0129 16:10:11.012582 2688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 16:10:11.012913 kubelet[2688]: I0129 16:10:11.012898 2688 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 16:10:11.013005 kubelet[2688]: I0129 16:10:10.995991 2688 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 16:10:11.013251 kubelet[2688]: I0129 16:10:11.013208 2688 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 16:10:11.014736 kubelet[2688]: E0129 16:10:11.014714 2688 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-0-0-1a94fc8352\" not found"
Jan 29 16:10:11.017896 kubelet[2688]: I0129 16:10:11.017820 2688 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 16:10:11.018148 kubelet[2688]: I0129 16:10:11.018132 2688 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 16:10:11.021662 kubelet[2688]: I0129 16:10:11.021630 2688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 16:10:11.022776 kubelet[2688]: I0129 16:10:11.022741 2688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 16:10:11.022861 kubelet[2688]: I0129 16:10:11.022850 2688 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 16:10:11.022958 kubelet[2688]: I0129 16:10:11.022946 2688 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 16:10:11.023068 kubelet[2688]: E0129 16:10:11.023041 2688 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 16:10:11.027628 kubelet[2688]: I0129 16:10:11.027585 2688 factory.go:221] Registration of the systemd container factory successfully
Jan 29 16:10:11.027719 kubelet[2688]: I0129 16:10:11.027695 2688 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 16:10:11.032664 kubelet[2688]: I0129 16:10:11.032620 2688 factory.go:221] Registration of the containerd container factory successfully
Jan 29 16:10:11.038969 kubelet[2688]: E0129 16:10:11.037245 2688 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 16:10:11.088170 kubelet[2688]: I0129 16:10:11.088137 2688 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 16:10:11.088886 kubelet[2688]: I0129 16:10:11.088433 2688 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 16:10:11.088886 kubelet[2688]: I0129 16:10:11.088491 2688 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 16:10:11.088886 kubelet[2688]: I0129 16:10:11.088699 2688 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 16:10:11.088886 kubelet[2688]: I0129 16:10:11.088714 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 16:10:11.088886 kubelet[2688]: I0129 16:10:11.088740 2688 policy_none.go:49] "None policy: Start"
Jan 29 16:10:11.090362 kubelet[2688]: I0129 16:10:11.090331 2688 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 16:10:11.090486 kubelet[2688]: I0129 16:10:11.090456 2688 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 16:10:11.090827 kubelet[2688]: I0129 16:10:11.090806 2688 state_mem.go:75] "Updated machine memory state"
Jan 29 16:10:11.095770 kubelet[2688]: I0129 16:10:11.095350 2688 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 16:10:11.095770 kubelet[2688]: I0129 16:10:11.095495 2688 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 16:10:11.095770 kubelet[2688]: I0129 16:10:11.095512 2688 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 16:10:11.095770 kubelet[2688]: I0129 16:10:11.095699 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 16:10:11.137100 kubelet[2688]: E0129 16:10:11.136572 2688 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" already exists" pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.200927 kubelet[2688]: I0129 16:10:11.200885 2688 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.214951 kubelet[2688]: I0129 16:10:11.214913 2688 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.215091 kubelet[2688]: I0129 16:10:11.215032 2688 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.219931 kubelet[2688]: I0129 16:10:11.219826 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.219931 kubelet[2688]: I0129 16:10:11.219873 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.219931 kubelet[2688]: I0129 16:10:11.219905 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.220259 kubelet[2688]: I0129 16:10:11.220123 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87bc5600ae7e68cbf76ecc535ad2a727-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-0-1a94fc8352\" (UID: \"87bc5600ae7e68cbf76ecc535ad2a727\") " pod="kube-system/kube-apiserver-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.220259 kubelet[2688]: I0129 16:10:11.220155 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87bc5600ae7e68cbf76ecc535ad2a727-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-0-1a94fc8352\" (UID: \"87bc5600ae7e68cbf76ecc535ad2a727\") " pod="kube-system/kube-apiserver-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.220259 kubelet[2688]: I0129 16:10:11.220187 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.220259 kubelet[2688]: I0129 16:10:11.220213 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87bc5600ae7e68cbf76ecc535ad2a727-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-0-1a94fc8352\" (UID: \"87bc5600ae7e68cbf76ecc535ad2a727\") " pod="kube-system/kube-apiserver-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.220259 kubelet[2688]: I0129 16:10:11.220232 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5c91365a935511c16ef4a2283064428-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-0-1a94fc8352\" (UID: \"b5c91365a935511c16ef4a2283064428\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.220411 kubelet[2688]: I0129 16:10:11.220246 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4a6d171ae69db727e94052249b8fc1e-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-0-1a94fc8352\" (UID: \"c4a6d171ae69db727e94052249b8fc1e\") " pod="kube-system/kube-scheduler-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:11.429072 sudo[2720]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 29 16:10:11.429692 sudo[2720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 29 16:10:11.896303 sudo[2720]: pam_unix(sudo:session): session closed for user root
Jan 29 16:10:11.980613 kubelet[2688]: I0129 16:10:11.980475 2688 apiserver.go:52] "Watching apiserver"
Jan 29 16:10:12.018703 kubelet[2688]: I0129 16:10:12.018649 2688 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 29 16:10:12.080910 kubelet[2688]: E0129 16:10:12.080873 2688 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-0-0-0-1a94fc8352\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-0-0-1a94fc8352"
Jan 29 16:10:12.129639 kubelet[2688]: I0129 16:10:12.129393 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-0-1a94fc8352" podStartSLOduration=1.129373393 podStartE2EDuration="1.129373393s" podCreationTimestamp="2025-01-29 16:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:10:12.129123261 +0000 UTC m=+1.215821374" watchObservedRunningTime="2025-01-29 16:10:12.129373393 +0000 UTC m=+1.216071506"
Jan 29 16:10:12.129639 kubelet[2688]: I0129 16:10:12.129519 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-0-1a94fc8352" podStartSLOduration=1.1295131999999999 podStartE2EDuration="1.1295132s" podCreationTimestamp="2025-01-29 16:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:10:12.1144159 +0000 UTC m=+1.201113973" watchObservedRunningTime="2025-01-29 16:10:12.1295132 +0000 UTC m=+1.216211273"
Jan 29 16:10:13.610647 sudo[1788]: pam_unix(sudo:session): session closed for user root
Jan 29 16:10:13.771761 sshd[1787]: Connection closed by 139.178.68.195 port 49730
Jan 29 16:10:13.773373 sshd-session[1785]: pam_unix(sshd:session): session closed for user core
Jan 29 16:10:13.778666 systemd[1]: sshd@7-91.107.217.81:22-139.178.68.195:49730.service: Deactivated successfully.
Jan 29 16:10:13.782688 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 16:10:13.783072 systemd[1]: session-7.scope: Consumed 7.723s CPU time, 259.3M memory peak.
Jan 29 16:10:13.784485 systemd-logind[1471]: Session 7 logged out. Waiting for processes to exit.
Jan 29 16:10:13.785877 systemd-logind[1471]: Removed session 7.
Jan 29 16:10:16.758796 kubelet[2688]: I0129 16:10:16.758721 2688 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 16:10:16.759761 containerd[1493]: time="2025-01-29T16:10:16.759694334Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 16:10:16.760477 kubelet[2688]: I0129 16:10:16.760000 2688 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 16:10:17.449946 kubelet[2688]: I0129 16:10:17.449877 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-0-1a94fc8352" podStartSLOduration=8.449860461 podStartE2EDuration="8.449860461s" podCreationTimestamp="2025-01-29 16:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:10:12.164546797 +0000 UTC m=+1.251244870" watchObservedRunningTime="2025-01-29 16:10:17.449860461 +0000 UTC m=+6.536558534"
Jan 29 16:10:17.460180 kubelet[2688]: I0129 16:10:17.459183 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/96d91140-f9aa-458f-8c28-d84c5be2f02c-kube-proxy\") pod \"kube-proxy-tx48g\" (UID: \"96d91140-f9aa-458f-8c28-d84c5be2f02c\") " pod="kube-system/kube-proxy-tx48g"
Jan 29 16:10:17.460180 kubelet[2688]: I0129 16:10:17.459223 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96d91140-f9aa-458f-8c28-d84c5be2f02c-xtables-lock\") pod \"kube-proxy-tx48g\" (UID: \"96d91140-f9aa-458f-8c28-d84c5be2f02c\") " pod="kube-system/kube-proxy-tx48g"
Jan 29 16:10:17.460180 kubelet[2688]: I0129 16:10:17.459240 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96d91140-f9aa-458f-8c28-d84c5be2f02c-lib-modules\") pod \"kube-proxy-tx48g\" (UID: \"96d91140-f9aa-458f-8c28-d84c5be2f02c\") " pod="kube-system/kube-proxy-tx48g"
Jan 29 16:10:17.460180 kubelet[2688]: I0129 16:10:17.459256 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn8js\" (UniqueName: \"kubernetes.io/projected/96d91140-f9aa-458f-8c28-d84c5be2f02c-kube-api-access-fn8js\") pod \"kube-proxy-tx48g\" (UID: \"96d91140-f9aa-458f-8c28-d84c5be2f02c\") " pod="kube-system/kube-proxy-tx48g"
Jan 29 16:10:17.462869 systemd[1]: Created slice kubepods-besteffort-pod96d91140_f9aa_458f_8c28_d84c5be2f02c.slice - libcontainer container kubepods-besteffort-pod96d91140_f9aa_458f_8c28_d84c5be2f02c.slice.
Jan 29 16:10:17.472711 systemd[1]: Created slice kubepods-burstable-podf6a4e58f_5e56_4521_b953_164312632cb3.slice - libcontainer container kubepods-burstable-podf6a4e58f_5e56_4521_b953_164312632cb3.slice.
Jan 29 16:10:17.560432 kubelet[2688]: I0129 16:10:17.559739 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-etc-cni-netd\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560432 kubelet[2688]: I0129 16:10:17.559800 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cni-path\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560432 kubelet[2688]: I0129 16:10:17.559827 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6a4e58f-5e56-4521-b953-164312632cb3-clustermesh-secrets\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560432 kubelet[2688]: I0129 16:10:17.559868 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-config-path\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560432 kubelet[2688]: I0129 16:10:17.559893 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjqvh\" (UniqueName: \"kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-kube-api-access-fjqvh\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560432 kubelet[2688]: I0129 16:10:17.559921 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-lib-modules\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560957 kubelet[2688]: I0129 16:10:17.559943 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-host-proc-sys-net\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560957 kubelet[2688]: I0129 16:10:17.560009 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-run\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560957 kubelet[2688]: I0129 16:10:17.560039 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-hostproc\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560957 kubelet[2688]: I0129 16:10:17.560067 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-cgroup\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560957 kubelet[2688]: I0129 16:10:17.560104 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-host-proc-sys-kernel\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.560957 kubelet[2688]: I0129 16:10:17.560141 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-hubble-tls\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.561368 kubelet[2688]: I0129 16:10:17.560167 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-bpf-maps\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.561368 kubelet[2688]: I0129 16:10:17.560194 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-xtables-lock\") pod \"cilium-86r9t\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") " pod="kube-system/cilium-86r9t"
Jan 29 16:10:17.571395 kubelet[2688]: E0129 16:10:17.571159 2688 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 29 16:10:17.571395 kubelet[2688]: E0129 16:10:17.571197 2688 projected.go:194] Error preparing data for projected volume kube-api-access-fn8js for pod kube-system/kube-proxy-tx48g: configmap "kube-root-ca.crt" not found
Jan 29 16:10:17.571395 kubelet[2688]: E0129 16:10:17.571269 2688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/96d91140-f9aa-458f-8c28-d84c5be2f02c-kube-api-access-fn8js podName:96d91140-f9aa-458f-8c28-d84c5be2f02c nodeName:}" failed. No retries permitted until 2025-01-29 16:10:18.071242054 +0000 UTC m=+7.157940127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fn8js" (UniqueName: "kubernetes.io/projected/96d91140-f9aa-458f-8c28-d84c5be2f02c-kube-api-access-fn8js") pod "kube-proxy-tx48g" (UID: "96d91140-f9aa-458f-8c28-d84c5be2f02c") : configmap "kube-root-ca.crt" not found
Jan 29 16:10:17.685055 kubelet[2688]: E0129 16:10:17.684763 2688 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 29 16:10:17.685055 kubelet[2688]: E0129 16:10:17.684807 2688 projected.go:194] Error preparing data for projected volume kube-api-access-fjqvh for pod kube-system/cilium-86r9t: configmap "kube-root-ca.crt" not found
Jan 29 16:10:17.685469 kubelet[2688]: E0129 16:10:17.684866 2688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-kube-api-access-fjqvh podName:f6a4e58f-5e56-4521-b953-164312632cb3 nodeName:}" failed. No retries permitted until 2025-01-29 16:10:18.184847727 +0000 UTC m=+7.271545800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fjqvh" (UniqueName: "kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-kube-api-access-fjqvh") pod "cilium-86r9t" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3") : configmap "kube-root-ca.crt" not found
Jan 29 16:10:17.811442 systemd[1]: Created slice kubepods-besteffort-pod1636ed8a_366c_4cef_82cb_84068f96f65d.slice - libcontainer container kubepods-besteffort-pod1636ed8a_366c_4cef_82cb_84068f96f65d.slice.
Jan 29 16:10:17.862789 kubelet[2688]: I0129 16:10:17.862668 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1636ed8a-366c-4cef-82cb-84068f96f65d-cilium-config-path\") pod \"cilium-operator-5d85765b45-6sc8w\" (UID: \"1636ed8a-366c-4cef-82cb-84068f96f65d\") " pod="kube-system/cilium-operator-5d85765b45-6sc8w"
Jan 29 16:10:17.863479 kubelet[2688]: I0129 16:10:17.863420 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf9zb\" (UniqueName: \"kubernetes.io/projected/1636ed8a-366c-4cef-82cb-84068f96f65d-kube-api-access-zf9zb\") pod \"cilium-operator-5d85765b45-6sc8w\" (UID: \"1636ed8a-366c-4cef-82cb-84068f96f65d\") " pod="kube-system/cilium-operator-5d85765b45-6sc8w"
Jan 29 16:10:18.116764 containerd[1493]: time="2025-01-29T16:10:18.116708305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6sc8w,Uid:1636ed8a-366c-4cef-82cb-84068f96f65d,Namespace:kube-system,Attempt:0,}"
Jan 29 16:10:18.143364 containerd[1493]: time="2025-01-29T16:10:18.142979912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:10:18.143364 containerd[1493]: time="2025-01-29T16:10:18.143273964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:10:18.143364 containerd[1493]: time="2025-01-29T16:10:18.143325846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:18.144136 containerd[1493]: time="2025-01-29T16:10:18.144097037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:18.170537 systemd[1]: Started cri-containerd-17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98.scope - libcontainer container 17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98.
Jan 29 16:10:18.202399 containerd[1493]: time="2025-01-29T16:10:18.202362198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6sc8w,Uid:1636ed8a-366c-4cef-82cb-84068f96f65d,Namespace:kube-system,Attempt:0,} returns sandbox id \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\""
Jan 29 16:10:18.206221 containerd[1493]: time="2025-01-29T16:10:18.206168670Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 16:10:18.369486 containerd[1493]: time="2025-01-29T16:10:18.369219086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tx48g,Uid:96d91140-f9aa-458f-8c28-d84c5be2f02c,Namespace:kube-system,Attempt:0,}"
Jan 29 16:10:18.377776 containerd[1493]: time="2025-01-29T16:10:18.377714624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86r9t,Uid:f6a4e58f-5e56-4521-b953-164312632cb3,Namespace:kube-system,Attempt:0,}"
Jan 29 16:10:18.394541 containerd[1493]: time="2025-01-29T16:10:18.393608617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:10:18.394541 containerd[1493]: time="2025-01-29T16:10:18.394233362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:10:18.394541 containerd[1493]: time="2025-01-29T16:10:18.394246803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:18.394541 containerd[1493]: time="2025-01-29T16:10:18.394380768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:18.418521 systemd[1]: Started cri-containerd-7fc1ca81ae8d886af7a51b3e150fac5717cd2c6bbf514ed3c78fcf10fad288d2.scope - libcontainer container 7fc1ca81ae8d886af7a51b3e150fac5717cd2c6bbf514ed3c78fcf10fad288d2.
Jan 29 16:10:18.429498 containerd[1493]: time="2025-01-29T16:10:18.426892303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 16:10:18.429498 containerd[1493]: time="2025-01-29T16:10:18.427028429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 16:10:18.429498 containerd[1493]: time="2025-01-29T16:10:18.427136273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:18.430458 containerd[1493]: time="2025-01-29T16:10:18.430378442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 16:10:18.455589 systemd[1]: Started cri-containerd-9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d.scope - libcontainer container 9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d.
Jan 29 16:10:18.457547 containerd[1493]: time="2025-01-29T16:10:18.457394078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tx48g,Uid:96d91140-f9aa-458f-8c28-d84c5be2f02c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fc1ca81ae8d886af7a51b3e150fac5717cd2c6bbf514ed3c78fcf10fad288d2\"" Jan 29 16:10:18.462042 containerd[1493]: time="2025-01-29T16:10:18.461897578Z" level=info msg="CreateContainer within sandbox \"7fc1ca81ae8d886af7a51b3e150fac5717cd2c6bbf514ed3c78fcf10fad288d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:10:18.488118 containerd[1493]: time="2025-01-29T16:10:18.488060220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-86r9t,Uid:f6a4e58f-5e56-4521-b953-164312632cb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\"" Jan 29 16:10:18.488895 containerd[1493]: time="2025-01-29T16:10:18.488742127Z" level=info msg="CreateContainer within sandbox \"7fc1ca81ae8d886af7a51b3e150fac5717cd2c6bbf514ed3c78fcf10fad288d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60f28613fa72712e77ae4f28cd1fdf2e5cc157eae8748e1781eff23a81bd3c21\"" Jan 29 16:10:18.490815 containerd[1493]: time="2025-01-29T16:10:18.489554520Z" level=info msg="StartContainer for \"60f28613fa72712e77ae4f28cd1fdf2e5cc157eae8748e1781eff23a81bd3c21\"" Jan 29 16:10:18.526530 systemd[1]: Started cri-containerd-60f28613fa72712e77ae4f28cd1fdf2e5cc157eae8748e1781eff23a81bd3c21.scope - libcontainer container 60f28613fa72712e77ae4f28cd1fdf2e5cc157eae8748e1781eff23a81bd3c21. 
Jan 29 16:10:18.559074 containerd[1493]: time="2025-01-29T16:10:18.558999326Z" level=info msg="StartContainer for \"60f28613fa72712e77ae4f28cd1fdf2e5cc157eae8748e1781eff23a81bd3c21\" returns successfully" Jan 29 16:10:19.101372 kubelet[2688]: I0129 16:10:19.101289 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tx48g" podStartSLOduration=2.101106403 podStartE2EDuration="2.101106403s" podCreationTimestamp="2025-01-29 16:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:10:19.10078007 +0000 UTC m=+8.187478143" watchObservedRunningTime="2025-01-29 16:10:19.101106403 +0000 UTC m=+8.187804476" Jan 29 16:10:20.157950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207469316.mount: Deactivated successfully. Jan 29 16:10:20.492322 containerd[1493]: time="2025-01-29T16:10:20.492029491Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:10:20.493915 containerd[1493]: time="2025-01-29T16:10:20.493860880Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 16:10:20.494967 containerd[1493]: time="2025-01-29T16:10:20.494907199Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:10:20.496273 containerd[1493]: time="2025-01-29T16:10:20.496134845Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo 
tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.289911774s" Jan 29 16:10:20.496273 containerd[1493]: time="2025-01-29T16:10:20.496169566Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 16:10:20.498915 containerd[1493]: time="2025-01-29T16:10:20.498724982Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 16:10:20.501010 containerd[1493]: time="2025-01-29T16:10:20.499942628Z" level=info msg="CreateContainer within sandbox \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 16:10:20.524831 containerd[1493]: time="2025-01-29T16:10:20.524780599Z" level=info msg="CreateContainer within sandbox \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\"" Jan 29 16:10:20.525693 containerd[1493]: time="2025-01-29T16:10:20.525571949Z" level=info msg="StartContainer for \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\"" Jan 29 16:10:20.556544 systemd[1]: Started cri-containerd-051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de.scope - libcontainer container 051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de. 
Jan 29 16:10:20.587936 containerd[1493]: time="2025-01-29T16:10:20.587885125Z" level=info msg="StartContainer for \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\" returns successfully" Jan 29 16:10:22.978112 kubelet[2688]: I0129 16:10:22.978033 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6sc8w" podStartSLOduration=3.6850219539999998 podStartE2EDuration="5.978016286s" podCreationTimestamp="2025-01-29 16:10:17 +0000 UTC" firstStartedPulling="2025-01-29 16:10:18.204907699 +0000 UTC m=+7.291605772" lastFinishedPulling="2025-01-29 16:10:20.497902071 +0000 UTC m=+9.584600104" observedRunningTime="2025-01-29 16:10:21.169476948 +0000 UTC m=+10.256175061" watchObservedRunningTime="2025-01-29 16:10:22.978016286 +0000 UTC m=+12.064714359" Jan 29 16:10:29.836660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505324527.mount: Deactivated successfully. Jan 29 16:10:31.350085 containerd[1493]: time="2025-01-29T16:10:31.349950844Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:10:31.352071 containerd[1493]: time="2025-01-29T16:10:31.351753856Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 16:10:31.353185 containerd[1493]: time="2025-01-29T16:10:31.353121735Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:10:31.355808 containerd[1493]: time="2025-01-29T16:10:31.355647888Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.856892465s" Jan 29 16:10:31.355808 containerd[1493]: time="2025-01-29T16:10:31.355696770Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 16:10:31.362645 containerd[1493]: time="2025-01-29T16:10:31.362592048Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:10:31.385320 containerd[1493]: time="2025-01-29T16:10:31.385267782Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\"" Jan 29 16:10:31.386041 containerd[1493]: time="2025-01-29T16:10:31.386009403Z" level=info msg="StartContainer for \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\"" Jan 29 16:10:31.423620 systemd[1]: Started cri-containerd-4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75.scope - libcontainer container 4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75. Jan 29 16:10:31.453684 containerd[1493]: time="2025-01-29T16:10:31.453608951Z" level=info msg="StartContainer for \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\" returns successfully" Jan 29 16:10:31.468058 systemd[1]: cri-containerd-4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75.scope: Deactivated successfully. 
Jan 29 16:10:31.550178 containerd[1493]: time="2025-01-29T16:10:31.549866884Z" level=info msg="shim disconnected" id=4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75 namespace=k8s.io Jan 29 16:10:31.550178 containerd[1493]: time="2025-01-29T16:10:31.549988288Z" level=warning msg="cleaning up after shim disconnected" id=4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75 namespace=k8s.io Jan 29 16:10:31.550178 containerd[1493]: time="2025-01-29T16:10:31.550000048Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:10:31.561234 containerd[1493]: time="2025-01-29T16:10:31.561169570Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:10:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:10:32.131496 containerd[1493]: time="2025-01-29T16:10:32.131442696Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:10:32.143310 containerd[1493]: time="2025-01-29T16:10:32.143247390Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\"" Jan 29 16:10:32.145871 containerd[1493]: time="2025-01-29T16:10:32.143972251Z" level=info msg="StartContainer for \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\"" Jan 29 16:10:32.176543 systemd[1]: Started cri-containerd-3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331.scope - libcontainer container 3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331. 
Jan 29 16:10:32.205433 containerd[1493]: time="2025-01-29T16:10:32.205301146Z" level=info msg="StartContainer for \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\" returns successfully" Jan 29 16:10:32.222636 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:10:32.222884 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:10:32.223625 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:10:32.231021 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:10:32.231536 systemd[1]: cri-containerd-3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331.scope: Deactivated successfully. Jan 29 16:10:32.260500 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:10:32.265725 containerd[1493]: time="2025-01-29T16:10:32.265202480Z" level=info msg="shim disconnected" id=3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331 namespace=k8s.io Jan 29 16:10:32.265725 containerd[1493]: time="2025-01-29T16:10:32.265268922Z" level=warning msg="cleaning up after shim disconnected" id=3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331 namespace=k8s.io Jan 29 16:10:32.265725 containerd[1493]: time="2025-01-29T16:10:32.265280843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:10:32.377614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75-rootfs.mount: Deactivated successfully. 
Jan 29 16:10:33.137558 containerd[1493]: time="2025-01-29T16:10:33.137308650Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:10:33.159999 containerd[1493]: time="2025-01-29T16:10:33.159853276Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\"" Jan 29 16:10:33.161993 containerd[1493]: time="2025-01-29T16:10:33.161962175Z" level=info msg="StartContainer for \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\"" Jan 29 16:10:33.198733 systemd[1]: Started cri-containerd-676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4.scope - libcontainer container 676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4. Jan 29 16:10:33.238184 containerd[1493]: time="2025-01-29T16:10:33.238083491Z" level=info msg="StartContainer for \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\" returns successfully" Jan 29 16:10:33.242508 systemd[1]: cri-containerd-676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4.scope: Deactivated successfully. 
Jan 29 16:10:33.267461 containerd[1493]: time="2025-01-29T16:10:33.266279395Z" level=info msg="shim disconnected" id=676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4 namespace=k8s.io Jan 29 16:10:33.267461 containerd[1493]: time="2025-01-29T16:10:33.266533362Z" level=warning msg="cleaning up after shim disconnected" id=676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4 namespace=k8s.io Jan 29 16:10:33.267461 containerd[1493]: time="2025-01-29T16:10:33.266553122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:10:33.376520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4-rootfs.mount: Deactivated successfully. Jan 29 16:10:34.141042 containerd[1493]: time="2025-01-29T16:10:34.140977284Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:10:34.163840 containerd[1493]: time="2025-01-29T16:10:34.163700305Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\"" Jan 29 16:10:34.164811 containerd[1493]: time="2025-01-29T16:10:34.164591769Z" level=info msg="StartContainer for \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\"" Jan 29 16:10:34.205712 systemd[1]: Started cri-containerd-50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a.scope - libcontainer container 50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a. Jan 29 16:10:34.239880 systemd[1]: cri-containerd-50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a.scope: Deactivated successfully. 
Jan 29 16:10:34.243754 containerd[1493]: time="2025-01-29T16:10:34.243619410Z" level=info msg="StartContainer for \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\" returns successfully" Jan 29 16:10:34.269185 containerd[1493]: time="2025-01-29T16:10:34.269080226Z" level=info msg="shim disconnected" id=50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a namespace=k8s.io Jan 29 16:10:34.269185 containerd[1493]: time="2025-01-29T16:10:34.269144068Z" level=warning msg="cleaning up after shim disconnected" id=50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a namespace=k8s.io Jan 29 16:10:34.269185 containerd[1493]: time="2025-01-29T16:10:34.269158028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:10:34.375369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a-rootfs.mount: Deactivated successfully. Jan 29 16:10:35.152535 containerd[1493]: time="2025-01-29T16:10:35.152479149Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:10:35.181100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715906574.mount: Deactivated successfully. 
Jan 29 16:10:35.184197 containerd[1493]: time="2025-01-29T16:10:35.184089880Z" level=info msg="CreateContainer within sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\"" Jan 29 16:10:35.185538 containerd[1493]: time="2025-01-29T16:10:35.184720296Z" level=info msg="StartContainer for \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\"" Jan 29 16:10:35.223727 systemd[1]: Started cri-containerd-59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8.scope - libcontainer container 59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8. Jan 29 16:10:35.261042 containerd[1493]: time="2025-01-29T16:10:35.260919347Z" level=info msg="StartContainer for \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\" returns successfully" Jan 29 16:10:35.364363 kubelet[2688]: I0129 16:10:35.363445 2688 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 16:10:35.418174 systemd[1]: Created slice kubepods-burstable-podaf46a2ab_971c_43c0_8c12_addf866cc2b0.slice - libcontainer container kubepods-burstable-podaf46a2ab_971c_43c0_8c12_addf866cc2b0.slice. Jan 29 16:10:35.427057 systemd[1]: Created slice kubepods-burstable-podeb403f68_c50b_45b5_9488_28617d40e96c.slice - libcontainer container kubepods-burstable-podeb403f68_c50b_45b5_9488_28617d40e96c.slice. 
Jan 29 16:10:35.487544 kubelet[2688]: I0129 16:10:35.487506 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtsmb\" (UniqueName: \"kubernetes.io/projected/eb403f68-c50b-45b5-9488-28617d40e96c-kube-api-access-gtsmb\") pod \"coredns-6f6b679f8f-lrnzs\" (UID: \"eb403f68-c50b-45b5-9488-28617d40e96c\") " pod="kube-system/coredns-6f6b679f8f-lrnzs" Jan 29 16:10:35.487773 kubelet[2688]: I0129 16:10:35.487622 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g27hs\" (UniqueName: \"kubernetes.io/projected/af46a2ab-971c-43c0-8c12-addf866cc2b0-kube-api-access-g27hs\") pod \"coredns-6f6b679f8f-s76g8\" (UID: \"af46a2ab-971c-43c0-8c12-addf866cc2b0\") " pod="kube-system/coredns-6f6b679f8f-s76g8" Jan 29 16:10:35.487877 kubelet[2688]: I0129 16:10:35.487652 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af46a2ab-971c-43c0-8c12-addf866cc2b0-config-volume\") pod \"coredns-6f6b679f8f-s76g8\" (UID: \"af46a2ab-971c-43c0-8c12-addf866cc2b0\") " pod="kube-system/coredns-6f6b679f8f-s76g8" Jan 29 16:10:35.487877 kubelet[2688]: I0129 16:10:35.487849 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb403f68-c50b-45b5-9488-28617d40e96c-config-volume\") pod \"coredns-6f6b679f8f-lrnzs\" (UID: \"eb403f68-c50b-45b5-9488-28617d40e96c\") " pod="kube-system/coredns-6f6b679f8f-lrnzs" Jan 29 16:10:35.723567 containerd[1493]: time="2025-01-29T16:10:35.723379709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s76g8,Uid:af46a2ab-971c-43c0-8c12-addf866cc2b0,Namespace:kube-system,Attempt:0,}" Jan 29 16:10:35.736800 containerd[1493]: time="2025-01-29T16:10:35.736503262Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-lrnzs,Uid:eb403f68-c50b-45b5-9488-28617d40e96c,Namespace:kube-system,Attempt:0,}" Jan 29 16:10:36.179181 kubelet[2688]: I0129 16:10:36.179108 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-86r9t" podStartSLOduration=6.313619328 podStartE2EDuration="19.179089898s" podCreationTimestamp="2025-01-29 16:10:17 +0000 UTC" firstStartedPulling="2025-01-29 16:10:18.491421554 +0000 UTC m=+7.578119627" lastFinishedPulling="2025-01-29 16:10:31.356892124 +0000 UTC m=+20.443590197" observedRunningTime="2025-01-29 16:10:36.178108192 +0000 UTC m=+25.264806305" watchObservedRunningTime="2025-01-29 16:10:36.179089898 +0000 UTC m=+25.265787971" Jan 29 16:10:37.423087 systemd-networkd[1391]: cilium_host: Link UP Jan 29 16:10:37.423272 systemd-networkd[1391]: cilium_net: Link UP Jan 29 16:10:37.423466 systemd-networkd[1391]: cilium_net: Gained carrier Jan 29 16:10:37.423599 systemd-networkd[1391]: cilium_host: Gained carrier Jan 29 16:10:37.528523 systemd-networkd[1391]: cilium_vxlan: Link UP Jan 29 16:10:37.528532 systemd-networkd[1391]: cilium_vxlan: Gained carrier Jan 29 16:10:37.823495 kernel: NET: Registered PF_ALG protocol family Jan 29 16:10:38.124973 systemd-networkd[1391]: cilium_net: Gained IPv6LL Jan 29 16:10:38.253077 systemd-networkd[1391]: cilium_host: Gained IPv6LL Jan 29 16:10:38.540118 systemd-networkd[1391]: lxc_health: Link UP Jan 29 16:10:38.542167 systemd-networkd[1391]: lxc_health: Gained carrier Jan 29 16:10:38.793413 kernel: eth0: renamed from tmpdd303 Jan 29 16:10:38.798070 systemd-networkd[1391]: lxc3beebdf84e5f: Link UP Jan 29 16:10:38.807951 systemd-networkd[1391]: lxc3beebdf84e5f: Gained carrier Jan 29 16:10:38.808071 systemd-networkd[1391]: lxcb52182f83c38: Link UP Jan 29 16:10:38.809509 kernel: eth0: renamed from tmp2d645 Jan 29 16:10:38.818061 systemd-networkd[1391]: lxcb52182f83c38: Gained carrier Jan 29 16:10:39.532972 systemd-networkd[1391]: cilium_vxlan: 
Gained IPv6LL Jan 29 16:10:39.852722 systemd-networkd[1391]: lxc3beebdf84e5f: Gained IPv6LL Jan 29 16:10:40.428557 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 29 16:10:40.492534 systemd-networkd[1391]: lxcb52182f83c38: Gained IPv6LL Jan 29 16:10:42.937360 containerd[1493]: time="2025-01-29T16:10:42.931780935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:10:42.937360 containerd[1493]: time="2025-01-29T16:10:42.931837256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:10:42.937360 containerd[1493]: time="2025-01-29T16:10:42.931851936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:10:42.937360 containerd[1493]: time="2025-01-29T16:10:42.933227850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:10:42.960400 containerd[1493]: time="2025-01-29T16:10:42.958959522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:10:42.960400 containerd[1493]: time="2025-01-29T16:10:42.960080949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:10:42.960400 containerd[1493]: time="2025-01-29T16:10:42.960094470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:10:42.960400 containerd[1493]: time="2025-01-29T16:10:42.960187752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:10:42.977575 systemd[1]: Started cri-containerd-2d64585bdc88ddd888808bf7051f46bd83d91a88da2709e59430ec457c356a94.scope - libcontainer container 2d64585bdc88ddd888808bf7051f46bd83d91a88da2709e59430ec457c356a94. Jan 29 16:10:42.986881 systemd[1]: Started cri-containerd-dd3036102ad062b7c758f897c5baeb978411d543ad855c87effb2e78c6bba9c0.scope - libcontainer container dd3036102ad062b7c758f897c5baeb978411d543ad855c87effb2e78c6bba9c0. Jan 29 16:10:43.035754 containerd[1493]: time="2025-01-29T16:10:43.035676156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-s76g8,Uid:af46a2ab-971c-43c0-8c12-addf866cc2b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d64585bdc88ddd888808bf7051f46bd83d91a88da2709e59430ec457c356a94\"" Jan 29 16:10:43.041975 containerd[1493]: time="2025-01-29T16:10:43.041493817Z" level=info msg="CreateContainer within sandbox \"2d64585bdc88ddd888808bf7051f46bd83d91a88da2709e59430ec457c356a94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:10:43.044902 containerd[1493]: time="2025-01-29T16:10:43.044840858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lrnzs,Uid:eb403f68-c50b-45b5-9488-28617d40e96c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd3036102ad062b7c758f897c5baeb978411d543ad855c87effb2e78c6bba9c0\"" Jan 29 16:10:43.050351 containerd[1493]: time="2025-01-29T16:10:43.050209509Z" level=info msg="CreateContainer within sandbox \"dd3036102ad062b7c758f897c5baeb978411d543ad855c87effb2e78c6bba9c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:10:43.079398 containerd[1493]: time="2025-01-29T16:10:43.079329856Z" level=info msg="CreateContainer within sandbox \"2d64585bdc88ddd888808bf7051f46bd83d91a88da2709e59430ec457c356a94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d11b68a56abac13949978e98d9f9d4befbb77b185e93c75ca3e4f2df16c9c25\"" Jan 
29 16:10:43.082280 containerd[1493]: time="2025-01-29T16:10:43.080358721Z" level=info msg="StartContainer for \"3d11b68a56abac13949978e98d9f9d4befbb77b185e93c75ca3e4f2df16c9c25\"" Jan 29 16:10:43.085893 containerd[1493]: time="2025-01-29T16:10:43.085860855Z" level=info msg="CreateContainer within sandbox \"dd3036102ad062b7c758f897c5baeb978411d543ad855c87effb2e78c6bba9c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a5e03b6bf588dd6aea58d40d4f5521c420602184b494734a67df990dfdbf215\"" Jan 29 16:10:43.087473 containerd[1493]: time="2025-01-29T16:10:43.087426893Z" level=info msg="StartContainer for \"5a5e03b6bf588dd6aea58d40d4f5521c420602184b494734a67df990dfdbf215\"" Jan 29 16:10:43.113521 systemd[1]: Started cri-containerd-3d11b68a56abac13949978e98d9f9d4befbb77b185e93c75ca3e4f2df16c9c25.scope - libcontainer container 3d11b68a56abac13949978e98d9f9d4befbb77b185e93c75ca3e4f2df16c9c25. Jan 29 16:10:43.149616 systemd[1]: Started cri-containerd-5a5e03b6bf588dd6aea58d40d4f5521c420602184b494734a67df990dfdbf215.scope - libcontainer container 5a5e03b6bf588dd6aea58d40d4f5521c420602184b494734a67df990dfdbf215. 
Jan 29 16:10:43.163674 containerd[1493]: time="2025-01-29T16:10:43.163501261Z" level=info msg="StartContainer for \"3d11b68a56abac13949978e98d9f9d4befbb77b185e93c75ca3e4f2df16c9c25\" returns successfully" Jan 29 16:10:43.196999 containerd[1493]: time="2025-01-29T16:10:43.196227576Z" level=info msg="StartContainer for \"5a5e03b6bf588dd6aea58d40d4f5521c420602184b494734a67df990dfdbf215\" returns successfully" Jan 29 16:10:43.211933 kubelet[2688]: I0129 16:10:43.211724 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-s76g8" podStartSLOduration=26.211705312 podStartE2EDuration="26.211705312s" podCreationTimestamp="2025-01-29 16:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:10:43.211597389 +0000 UTC m=+32.298295462" watchObservedRunningTime="2025-01-29 16:10:43.211705312 +0000 UTC m=+32.298403385" Jan 29 16:10:44.210249 kubelet[2688]: I0129 16:10:44.210157 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lrnzs" podStartSLOduration=27.210135155 podStartE2EDuration="27.210135155s" podCreationTimestamp="2025-01-29 16:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:10:44.208985167 +0000 UTC m=+33.295683320" watchObservedRunningTime="2025-01-29 16:10:44.210135155 +0000 UTC m=+33.296833228" Jan 29 16:13:42.970771 systemd[1]: Started sshd@8-91.107.217.81:22-198.235.24.126:62790.service - OpenSSH per-connection server daemon (198.235.24.126:62790). Jan 29 16:13:48.078784 sshd[4097]: Connection reset by 198.235.24.126 port 62790 [preauth] Jan 29 16:13:48.080621 systemd[1]: sshd@8-91.107.217.81:22-198.235.24.126:62790.service: Deactivated successfully. 
Jan 29 16:14:29.791842 systemd[1]: Started sshd@9-91.107.217.81:22-195.3.147.83:60498.service - OpenSSH per-connection server daemon (195.3.147.83:60498). Jan 29 16:14:30.208712 sshd[4112]: Invalid user admin from 195.3.147.83 port 60498 Jan 29 16:14:30.323421 sshd[4112]: Connection closed by invalid user admin 195.3.147.83 port 60498 [preauth] Jan 29 16:14:30.324910 systemd[1]: sshd@9-91.107.217.81:22-195.3.147.83:60498.service: Deactivated successfully. Jan 29 16:14:59.965759 systemd[1]: Started sshd@10-91.107.217.81:22-139.178.68.195:38628.service - OpenSSH per-connection server daemon (139.178.68.195:38628). Jan 29 16:15:00.948490 sshd[4119]: Accepted publickey for core from 139.178.68.195 port 38628 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:15:00.950630 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:15:00.956044 systemd-logind[1471]: New session 8 of user core. Jan 29 16:15:00.962637 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:15:01.722128 sshd[4121]: Connection closed by 139.178.68.195 port 38628 Jan 29 16:15:01.723095 sshd-session[4119]: pam_unix(sshd:session): session closed for user core Jan 29 16:15:01.732776 systemd[1]: sshd@10-91.107.217.81:22-139.178.68.195:38628.service: Deactivated successfully. Jan 29 16:15:01.736855 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 16:15:01.741498 systemd-logind[1471]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:15:01.742974 systemd-logind[1471]: Removed session 8. Jan 29 16:15:06.896608 systemd[1]: Started sshd@11-91.107.217.81:22-139.178.68.195:58720.service - OpenSSH per-connection server daemon (139.178.68.195:58720). 
Jan 29 16:15:07.880405 sshd[4134]: Accepted publickey for core from 139.178.68.195 port 58720 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:07.882750 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:07.888423 systemd-logind[1471]: New session 9 of user core.
Jan 29 16:15:07.891563 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 29 16:15:08.647409 sshd[4136]: Connection closed by 139.178.68.195 port 58720
Jan 29 16:15:08.648158 sshd-session[4134]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:08.653987 systemd-logind[1471]: Session 9 logged out. Waiting for processes to exit.
Jan 29 16:15:08.654172 systemd[1]: sshd@11-91.107.217.81:22-139.178.68.195:58720.service: Deactivated successfully.
Jan 29 16:15:08.657643 systemd[1]: session-9.scope: Deactivated successfully.
Jan 29 16:15:08.659295 systemd-logind[1471]: Removed session 9.
Jan 29 16:15:13.830807 systemd[1]: Started sshd@12-91.107.217.81:22-139.178.68.195:58734.service - OpenSSH per-connection server daemon (139.178.68.195:58734).
Jan 29 16:15:14.824432 sshd[4151]: Accepted publickey for core from 139.178.68.195 port 58734 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:14.826758 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:14.831971 systemd-logind[1471]: New session 10 of user core.
Jan 29 16:15:14.845767 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 29 16:15:15.596567 sshd[4153]: Connection closed by 139.178.68.195 port 58734
Jan 29 16:15:15.596452 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:15.602550 systemd-logind[1471]: Session 10 logged out. Waiting for processes to exit.
Jan 29 16:15:15.603554 systemd[1]: sshd@12-91.107.217.81:22-139.178.68.195:58734.service: Deactivated successfully.
Jan 29 16:15:15.605640 systemd[1]: session-10.scope: Deactivated successfully.
Jan 29 16:15:15.607521 systemd-logind[1471]: Removed session 10.
Jan 29 16:15:15.773622 systemd[1]: Started sshd@13-91.107.217.81:22-139.178.68.195:41018.service - OpenSSH per-connection server daemon (139.178.68.195:41018).
Jan 29 16:15:16.767077 sshd[4166]: Accepted publickey for core from 139.178.68.195 port 41018 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:16.768755 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:16.775460 systemd-logind[1471]: New session 11 of user core.
Jan 29 16:15:16.779568 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 29 16:15:17.579964 sshd[4168]: Connection closed by 139.178.68.195 port 41018
Jan 29 16:15:17.580549 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:17.586993 systemd[1]: sshd@13-91.107.217.81:22-139.178.68.195:41018.service: Deactivated successfully.
Jan 29 16:15:17.590683 systemd[1]: session-11.scope: Deactivated successfully.
Jan 29 16:15:17.592017 systemd-logind[1471]: Session 11 logged out. Waiting for processes to exit.
Jan 29 16:15:17.593302 systemd-logind[1471]: Removed session 11.
Jan 29 16:15:17.762678 systemd[1]: Started sshd@14-91.107.217.81:22-139.178.68.195:41032.service - OpenSSH per-connection server daemon (139.178.68.195:41032).
Jan 29 16:15:18.762097 sshd[4178]: Accepted publickey for core from 139.178.68.195 port 41032 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:18.766542 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:18.772735 systemd-logind[1471]: New session 12 of user core.
Jan 29 16:15:18.780611 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 29 16:15:18.853469 update_engine[1472]: I20250129 16:15:18.853384 1472 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 29 16:15:18.853469 update_engine[1472]: I20250129 16:15:18.853451 1472 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 29 16:15:18.854089 update_engine[1472]: I20250129 16:15:18.853721 1472 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 29 16:15:18.855397 update_engine[1472]: I20250129 16:15:18.855022 1472 omaha_request_params.cc:62] Current group set to alpha
Jan 29 16:15:18.855397 update_engine[1472]: I20250129 16:15:18.855270 1472 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 29 16:15:18.855397 update_engine[1472]: I20250129 16:15:18.855295 1472 update_attempter.cc:643] Scheduling an action processor start.
Jan 29 16:15:18.855397 update_engine[1472]: I20250129 16:15:18.855325 1472 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 16:15:18.855632 update_engine[1472]: I20250129 16:15:18.855420 1472 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 29 16:15:18.855632 update_engine[1472]: I20250129 16:15:18.855512 1472 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 16:15:18.855632 update_engine[1472]: I20250129 16:15:18.855527 1472 omaha_request_action.cc:272] Request:
Jan 29 16:15:18.855632 update_engine[1472]:
Jan 29 16:15:18.855632 update_engine[1472]:
Jan 29 16:15:18.855632 update_engine[1472]:
Jan 29 16:15:18.855632 update_engine[1472]:
Jan 29 16:15:18.855632 update_engine[1472]:
Jan 29 16:15:18.855632 update_engine[1472]:
Jan 29 16:15:18.855632 update_engine[1472]:
Jan 29 16:15:18.855632 update_engine[1472]:
Jan 29 16:15:18.855632 update_engine[1472]: I20250129 16:15:18.855540 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 16:15:18.856995 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 29 16:15:18.857942 update_engine[1472]: I20250129 16:15:18.857877 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 16:15:18.858452 update_engine[1472]: I20250129 16:15:18.858377 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 16:15:18.859290 update_engine[1472]: E20250129 16:15:18.859236 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 16:15:18.859371 update_engine[1472]: I20250129 16:15:18.859304 1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Jan 29 16:15:19.529810 sshd[4182]: Connection closed by 139.178.68.195 port 41032
Jan 29 16:15:19.532246 sshd-session[4178]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:19.540313 systemd-logind[1471]: Session 12 logged out. Waiting for processes to exit.
Jan 29 16:15:19.541007 systemd[1]: sshd@14-91.107.217.81:22-139.178.68.195:41032.service: Deactivated successfully.
Jan 29 16:15:19.545681 systemd[1]: session-12.scope: Deactivated successfully.
Jan 29 16:15:19.546799 systemd-logind[1471]: Removed session 12.
Jan 29 16:15:24.722863 systemd[1]: Started sshd@15-91.107.217.81:22-139.178.68.195:41042.service - OpenSSH per-connection server daemon (139.178.68.195:41042).
Jan 29 16:15:25.717970 sshd[4194]: Accepted publickey for core from 139.178.68.195 port 41042 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:25.719808 sshd-session[4194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:25.726123 systemd-logind[1471]: New session 13 of user core.
Jan 29 16:15:25.730515 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 29 16:15:26.474423 sshd[4196]: Connection closed by 139.178.68.195 port 41042
Jan 29 16:15:26.475202 sshd-session[4194]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:26.479740 systemd[1]: sshd@15-91.107.217.81:22-139.178.68.195:41042.service: Deactivated successfully.
Jan 29 16:15:26.481849 systemd[1]: session-13.scope: Deactivated successfully.
Jan 29 16:15:26.482858 systemd-logind[1471]: Session 13 logged out. Waiting for processes to exit.
Jan 29 16:15:26.484327 systemd-logind[1471]: Removed session 13.
Jan 29 16:15:28.856468 update_engine[1472]: I20250129 16:15:28.855664 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 16:15:28.856468 update_engine[1472]: I20250129 16:15:28.855988 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 16:15:28.856468 update_engine[1472]: I20250129 16:15:28.856320 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 16:15:28.857201 update_engine[1472]: E20250129 16:15:28.857160 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 16:15:28.857370 update_engine[1472]: I20250129 16:15:28.857314 1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Jan 29 16:15:31.661158 systemd[1]: Started sshd@16-91.107.217.81:22-139.178.68.195:33258.service - OpenSSH per-connection server daemon (139.178.68.195:33258).
Jan 29 16:15:32.656367 sshd[4207]: Accepted publickey for core from 139.178.68.195 port 33258 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:32.658453 sshd-session[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:32.665587 systemd-logind[1471]: New session 14 of user core.
Jan 29 16:15:32.672686 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 29 16:15:33.428073 sshd[4209]: Connection closed by 139.178.68.195 port 33258
Jan 29 16:15:33.428955 sshd-session[4207]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:33.434362 systemd[1]: sshd@16-91.107.217.81:22-139.178.68.195:33258.service: Deactivated successfully.
Jan 29 16:15:33.437777 systemd[1]: session-14.scope: Deactivated successfully.
Jan 29 16:15:33.439069 systemd-logind[1471]: Session 14 logged out. Waiting for processes to exit.
Jan 29 16:15:33.440929 systemd-logind[1471]: Removed session 14.
Jan 29 16:15:33.609865 systemd[1]: Started sshd@17-91.107.217.81:22-139.178.68.195:33264.service - OpenSSH per-connection server daemon (139.178.68.195:33264).
Jan 29 16:15:34.606867 sshd[4221]: Accepted publickey for core from 139.178.68.195 port 33264 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:34.608774 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:34.614834 systemd-logind[1471]: New session 15 of user core.
Jan 29 16:15:34.621640 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 29 16:15:35.422579 sshd[4223]: Connection closed by 139.178.68.195 port 33264
Jan 29 16:15:35.423929 sshd-session[4221]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:35.429642 systemd-logind[1471]: Session 15 logged out. Waiting for processes to exit.
Jan 29 16:15:35.429793 systemd[1]: sshd@17-91.107.217.81:22-139.178.68.195:33264.service: Deactivated successfully.
Jan 29 16:15:35.431860 systemd[1]: session-15.scope: Deactivated successfully.
Jan 29 16:15:35.435377 systemd-logind[1471]: Removed session 15.
Jan 29 16:15:35.599760 systemd[1]: Started sshd@18-91.107.217.81:22-139.178.68.195:45808.service - OpenSSH per-connection server daemon (139.178.68.195:45808).
Jan 29 16:15:36.588122 sshd[4233]: Accepted publickey for core from 139.178.68.195 port 45808 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:36.589699 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:36.594946 systemd-logind[1471]: New session 16 of user core.
Jan 29 16:15:36.608125 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 29 16:15:38.854646 update_engine[1472]: I20250129 16:15:38.854583 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 16:15:38.854985 update_engine[1472]: I20250129 16:15:38.854790 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 16:15:38.855009 update_engine[1472]: I20250129 16:15:38.854983 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 16:15:38.856101 update_engine[1472]: E20250129 16:15:38.856007 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 16:15:38.856101 update_engine[1472]: I20250129 16:15:38.856071 1472 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Jan 29 16:15:39.001378 sshd[4235]: Connection closed by 139.178.68.195 port 45808
Jan 29 16:15:39.002563 sshd-session[4233]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:39.007949 systemd[1]: sshd@18-91.107.217.81:22-139.178.68.195:45808.service: Deactivated successfully.
Jan 29 16:15:39.012889 systemd[1]: session-16.scope: Deactivated successfully.
Jan 29 16:15:39.017442 systemd-logind[1471]: Session 16 logged out. Waiting for processes to exit.
Jan 29 16:15:39.018983 systemd-logind[1471]: Removed session 16.
Jan 29 16:15:39.179862 systemd[1]: Started sshd@19-91.107.217.81:22-139.178.68.195:45816.service - OpenSSH per-connection server daemon (139.178.68.195:45816).
Jan 29 16:15:40.164687 sshd[4254]: Accepted publickey for core from 139.178.68.195 port 45816 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:40.166738 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:40.172374 systemd-logind[1471]: New session 17 of user core.
Jan 29 16:15:40.177591 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 29 16:15:41.046321 sshd[4256]: Connection closed by 139.178.68.195 port 45816
Jan 29 16:15:41.048570 sshd-session[4254]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:41.052085 systemd[1]: sshd@19-91.107.217.81:22-139.178.68.195:45816.service: Deactivated successfully.
Jan 29 16:15:41.055636 systemd[1]: session-17.scope: Deactivated successfully.
Jan 29 16:15:41.057904 systemd-logind[1471]: Session 17 logged out. Waiting for processes to exit.
Jan 29 16:15:41.059077 systemd-logind[1471]: Removed session 17.
Jan 29 16:15:41.228960 systemd[1]: Started sshd@20-91.107.217.81:22-139.178.68.195:45832.service - OpenSSH per-connection server daemon (139.178.68.195:45832).
Jan 29 16:15:42.233150 sshd[4266]: Accepted publickey for core from 139.178.68.195 port 45832 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:42.235618 sshd-session[4266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:42.242657 systemd-logind[1471]: New session 18 of user core.
Jan 29 16:15:42.249656 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 29 16:15:42.994186 sshd[4268]: Connection closed by 139.178.68.195 port 45832
Jan 29 16:15:42.996694 sshd-session[4266]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:43.002590 systemd[1]: sshd@20-91.107.217.81:22-139.178.68.195:45832.service: Deactivated successfully.
Jan 29 16:15:43.004607 systemd[1]: session-18.scope: Deactivated successfully.
Jan 29 16:15:43.006762 systemd-logind[1471]: Session 18 logged out. Waiting for processes to exit.
Jan 29 16:15:43.010085 systemd-logind[1471]: Removed session 18.
Jan 29 16:15:48.176728 systemd[1]: Started sshd@21-91.107.217.81:22-139.178.68.195:53014.service - OpenSSH per-connection server daemon (139.178.68.195:53014).
Jan 29 16:15:48.859509 update_engine[1472]: I20250129 16:15:48.858901 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 16:15:48.859509 update_engine[1472]: I20250129 16:15:48.859251 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 16:15:48.860163 update_engine[1472]: I20250129 16:15:48.859668 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 16:15:48.860215 update_engine[1472]: E20250129 16:15:48.860146 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 16:15:48.860278 update_engine[1472]: I20250129 16:15:48.860211 1472 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 16:15:48.860278 update_engine[1472]: I20250129 16:15:48.860230 1472 omaha_request_action.cc:617] Omaha request response:
Jan 29 16:15:48.860422 update_engine[1472]: E20250129 16:15:48.860374 1472 omaha_request_action.cc:636] Omaha request network transfer failed.
Jan 29 16:15:48.860422 update_engine[1472]: I20250129 16:15:48.860407 1472 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Jan 29 16:15:48.860422 update_engine[1472]: I20250129 16:15:48.860417 1472 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 16:15:48.860543 update_engine[1472]: I20250129 16:15:48.860426 1472 update_attempter.cc:306] Processing Done.
Jan 29 16:15:48.860543 update_engine[1472]: E20250129 16:15:48.860447 1472 update_attempter.cc:619] Update failed.
Jan 29 16:15:48.860543 update_engine[1472]: I20250129 16:15:48.860458 1472 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Jan 29 16:15:48.860543 update_engine[1472]: I20250129 16:15:48.860469 1472 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Jan 29 16:15:48.860543 update_engine[1472]: I20250129 16:15:48.860481 1472 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Jan 29 16:15:48.860694 update_engine[1472]: I20250129 16:15:48.860596 1472 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 29 16:15:48.860726 update_engine[1472]: I20250129 16:15:48.860697 1472 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 29 16:15:48.860726 update_engine[1472]: I20250129 16:15:48.860718 1472 omaha_request_action.cc:272] Request:
Jan 29 16:15:48.860726 update_engine[1472]:
Jan 29 16:15:48.860726 update_engine[1472]:
Jan 29 16:15:48.860726 update_engine[1472]:
Jan 29 16:15:48.860726 update_engine[1472]:
Jan 29 16:15:48.860726 update_engine[1472]:
Jan 29 16:15:48.860726 update_engine[1472]:
Jan 29 16:15:48.860923 update_engine[1472]: I20250129 16:15:48.860729 1472 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 29 16:15:48.861116 update_engine[1472]: I20250129 16:15:48.861016 1472 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 29 16:15:48.861364 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Jan 29 16:15:48.861684 update_engine[1472]: I20250129 16:15:48.861317 1472 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 29 16:15:48.861824 update_engine[1472]: E20250129 16:15:48.861722 1472 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Jan 29 16:15:48.861870 update_engine[1472]: I20250129 16:15:48.861824 1472 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Jan 29 16:15:48.861870 update_engine[1472]: I20250129 16:15:48.861845 1472 omaha_request_action.cc:617] Omaha request response:
Jan 29 16:15:48.861870 update_engine[1472]: I20250129 16:15:48.861857 1472 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 16:15:48.861957 update_engine[1472]: I20250129 16:15:48.861868 1472 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Jan 29 16:15:48.861957 update_engine[1472]: I20250129 16:15:48.861879 1472 update_attempter.cc:306] Processing Done.
Jan 29 16:15:48.861957 update_engine[1472]: I20250129 16:15:48.861892 1472 update_attempter.cc:310] Error event sent.
Jan 29 16:15:48.861957 update_engine[1472]: I20250129 16:15:48.861908 1472 update_check_scheduler.cc:74] Next update check in 42m17s
Jan 29 16:15:48.862370 locksmithd[1502]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Jan 29 16:15:49.167658 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 53014 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:49.170046 sshd-session[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:49.176168 systemd-logind[1471]: New session 19 of user core.
Jan 29 16:15:49.182692 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 29 16:15:49.917258 sshd[4287]: Connection closed by 139.178.68.195 port 53014
Jan 29 16:15:49.919082 sshd-session[4283]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:49.924876 systemd[1]: sshd@21-91.107.217.81:22-139.178.68.195:53014.service: Deactivated successfully.
Jan 29 16:15:49.927456 systemd[1]: session-19.scope: Deactivated successfully.
Jan 29 16:15:49.928808 systemd-logind[1471]: Session 19 logged out. Waiting for processes to exit.
Jan 29 16:15:49.930038 systemd-logind[1471]: Removed session 19.
Jan 29 16:15:55.095685 systemd[1]: Started sshd@22-91.107.217.81:22-139.178.68.195:52110.service - OpenSSH per-connection server daemon (139.178.68.195:52110).
Jan 29 16:15:56.080548 sshd[4298]: Accepted publickey for core from 139.178.68.195 port 52110 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:56.082661 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:56.088279 systemd-logind[1471]: New session 20 of user core.
Jan 29 16:15:56.099774 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 29 16:15:56.831620 sshd[4300]: Connection closed by 139.178.68.195 port 52110
Jan 29 16:15:56.832707 sshd-session[4298]: pam_unix(sshd:session): session closed for user core
Jan 29 16:15:56.837734 systemd-logind[1471]: Session 20 logged out. Waiting for processes to exit.
Jan 29 16:15:56.837913 systemd[1]: sshd@22-91.107.217.81:22-139.178.68.195:52110.service: Deactivated successfully.
Jan 29 16:15:56.840584 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 16:15:56.844329 systemd-logind[1471]: Removed session 20.
Jan 29 16:15:57.010882 systemd[1]: Started sshd@23-91.107.217.81:22-139.178.68.195:52122.service - OpenSSH per-connection server daemon (139.178.68.195:52122).
Jan 29 16:15:58.006041 sshd[4312]: Accepted publickey for core from 139.178.68.195 port 52122 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk
Jan 29 16:15:58.008816 sshd-session[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 16:15:58.015841 systemd-logind[1471]: New session 21 of user core.
Jan 29 16:15:58.023576 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 16:15:59.966717 systemd[1]: run-containerd-runc-k8s.io-59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8-runc.5tF4xQ.mount: Deactivated successfully.
Jan 29 16:15:59.968904 containerd[1493]: time="2025-01-29T16:15:59.968595454Z" level=info msg="StopContainer for \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\" with timeout 30 (s)"
Jan 29 16:15:59.973045 containerd[1493]: time="2025-01-29T16:15:59.972634932Z" level=info msg="Stop container \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\" with signal terminated"
Jan 29 16:15:59.985150 containerd[1493]: time="2025-01-29T16:15:59.985098333Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 16:15:59.992243 systemd[1]: cri-containerd-051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de.scope: Deactivated successfully.
Jan 29 16:15:59.997327 containerd[1493]: time="2025-01-29T16:15:59.997104925Z" level=info msg="StopContainer for \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\" with timeout 2 (s)"
Jan 29 16:15:59.998107 containerd[1493]: time="2025-01-29T16:15:59.998068424Z" level=info msg="Stop container \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\" with signal terminated"
Jan 29 16:16:00.011298 systemd-networkd[1391]: lxc_health: Link DOWN
Jan 29 16:16:00.011307 systemd-networkd[1391]: lxc_health: Lost carrier
Jan 29 16:16:00.033608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de-rootfs.mount: Deactivated successfully.
Jan 29 16:16:00.036118 systemd[1]: cri-containerd-59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8.scope: Deactivated successfully.
Jan 29 16:16:00.037656 systemd[1]: cri-containerd-59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8.scope: Consumed 7.964s CPU time, 124.4M memory peak, 144K read from disk, 12.9M written to disk.
Jan 29 16:16:00.051946 containerd[1493]: time="2025-01-29T16:16:00.051887183Z" level=info msg="shim disconnected" id=051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de namespace=k8s.io
Jan 29 16:16:00.051946 containerd[1493]: time="2025-01-29T16:16:00.051941544Z" level=warning msg="cleaning up after shim disconnected" id=051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de namespace=k8s.io
Jan 29 16:16:00.051946 containerd[1493]: time="2025-01-29T16:16:00.051951145Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:16:00.067603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8-rootfs.mount: Deactivated successfully.
Jan 29 16:16:00.074200 containerd[1493]: time="2025-01-29T16:16:00.074050531Z" level=info msg="shim disconnected" id=59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8 namespace=k8s.io
Jan 29 16:16:00.074200 containerd[1493]: time="2025-01-29T16:16:00.074145493Z" level=warning msg="cleaning up after shim disconnected" id=59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8 namespace=k8s.io
Jan 29 16:16:00.074200 containerd[1493]: time="2025-01-29T16:16:00.074154934Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:16:00.087148 containerd[1493]: time="2025-01-29T16:16:00.086894980Z" level=info msg="StopContainer for \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\" returns successfully"
Jan 29 16:16:00.089575 containerd[1493]: time="2025-01-29T16:16:00.089263105Z" level=info msg="StopPodSandbox for \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\""
Jan 29 16:16:00.089575 containerd[1493]: time="2025-01-29T16:16:00.089379068Z" level=info msg="Container to stop \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:16:00.093539 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98-shm.mount: Deactivated successfully.
Jan 29 16:16:00.101107 containerd[1493]: time="2025-01-29T16:16:00.101048373Z" level=info msg="StopContainer for \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\" returns successfully"
Jan 29 16:16:00.101877 containerd[1493]: time="2025-01-29T16:16:00.101818668Z" level=info msg="StopPodSandbox for \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\""
Jan 29 16:16:00.101966 containerd[1493]: time="2025-01-29T16:16:00.101882109Z" level=info msg="Container to stop \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:16:00.101966 containerd[1493]: time="2025-01-29T16:16:00.101894069Z" level=info msg="Container to stop \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:16:00.101966 containerd[1493]: time="2025-01-29T16:16:00.101902950Z" level=info msg="Container to stop \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:16:00.101966 containerd[1493]: time="2025-01-29T16:16:00.101911150Z" level=info msg="Container to stop \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:16:00.101966 containerd[1493]: time="2025-01-29T16:16:00.101920710Z" level=info msg="Container to stop \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 29 16:16:00.103011 systemd[1]: cri-containerd-17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98.scope: Deactivated successfully.
Jan 29 16:16:00.116655 systemd[1]: cri-containerd-9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d.scope: Deactivated successfully.
Jan 29 16:16:00.147973 containerd[1493]: time="2025-01-29T16:16:00.147895278Z" level=info msg="shim disconnected" id=9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d namespace=k8s.io
Jan 29 16:16:00.147973 containerd[1493]: time="2025-01-29T16:16:00.147971559Z" level=warning msg="cleaning up after shim disconnected" id=9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d namespace=k8s.io
Jan 29 16:16:00.148520 containerd[1493]: time="2025-01-29T16:16:00.147985080Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:16:00.151257 containerd[1493]: time="2025-01-29T16:16:00.151162821Z" level=info msg="shim disconnected" id=17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98 namespace=k8s.io
Jan 29 16:16:00.151257 containerd[1493]: time="2025-01-29T16:16:00.151261303Z" level=warning msg="cleaning up after shim disconnected" id=17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98 namespace=k8s.io
Jan 29 16:16:00.151257 containerd[1493]: time="2025-01-29T16:16:00.151274863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 16:16:00.163379 containerd[1493]: time="2025-01-29T16:16:00.163194693Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:16:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 16:16:00.166044 containerd[1493]: time="2025-01-29T16:16:00.165987747Z" level=info msg="TearDown network for sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" successfully"
Jan 29 16:16:00.166044 containerd[1493]: time="2025-01-29T16:16:00.166029948Z" level=info msg="StopPodSandbox for \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" returns successfully"
Jan 29 16:16:00.170248 containerd[1493]: time="2025-01-29T16:16:00.170101547Z" level=info msg="TearDown network for sandbox \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\" successfully"
Jan 29 16:16:00.170248 containerd[1493]: time="2025-01-29T16:16:00.170140468Z" level=info msg="StopPodSandbox for \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\" returns successfully"
Jan 29 16:16:00.298727 kubelet[2688]: I0129 16:16:00.295596 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6a4e58f-5e56-4521-b953-164312632cb3-clustermesh-secrets\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.298727 kubelet[2688]: I0129 16:16:00.295703 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-cgroup\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.298727 kubelet[2688]: I0129 16:16:00.295735 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-host-proc-sys-kernel\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.298727 kubelet[2688]: I0129 16:16:00.295769 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-hubble-tls\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.298727 kubelet[2688]: I0129 16:16:00.295796 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-bpf-maps\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.298727 kubelet[2688]: I0129 16:16:00.295827 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zf9zb\" (UniqueName: \"kubernetes.io/projected/1636ed8a-366c-4cef-82cb-84068f96f65d-kube-api-access-zf9zb\") pod \"1636ed8a-366c-4cef-82cb-84068f96f65d\" (UID: \"1636ed8a-366c-4cef-82cb-84068f96f65d\") "
Jan 29 16:16:00.299575 kubelet[2688]: I0129 16:16:00.295855 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-lib-modules\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.299575 kubelet[2688]: I0129 16:16:00.295886 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjqvh\" (UniqueName: \"kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-kube-api-access-fjqvh\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.299575 kubelet[2688]: I0129 16:16:00.295915 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-host-proc-sys-net\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.299575 kubelet[2688]: I0129 16:16:00.295945 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1636ed8a-366c-4cef-82cb-84068f96f65d-cilium-config-path\") pod \"1636ed8a-366c-4cef-82cb-84068f96f65d\" (UID: \"1636ed8a-366c-4cef-82cb-84068f96f65d\") "
Jan 29 16:16:00.299575 kubelet[2688]: I0129 16:16:00.295973 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cni-path\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.299575 kubelet[2688]: I0129 16:16:00.295999 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-run\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.299713 kubelet[2688]: I0129 16:16:00.296029 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-hostproc\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.299713 kubelet[2688]: I0129 16:16:00.296057 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-xtables-lock\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.299713 kubelet[2688]: I0129 16:16:00.296084 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-etc-cni-netd\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.299713 kubelet[2688]: I0129 16:16:00.296114 2688 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-config-path\") pod \"f6a4e58f-5e56-4521-b953-164312632cb3\" (UID: \"f6a4e58f-5e56-4521-b953-164312632cb3\") "
Jan 29 16:16:00.300262 kubelet[2688]: I0129 16:16:00.300150 2688
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6a4e58f-5e56-4521-b953-164312632cb3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 16:16:00.300384 kubelet[2688]: I0129 16:16:00.300211 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.300511 kubelet[2688]: I0129 16:16:00.300486 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.301302 kubelet[2688]: I0129 16:16:00.301276 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.301788 kubelet[2688]: I0129 16:16:00.301751 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.304606 kubelet[2688]: I0129 16:16:00.304541 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:16:00.304726 kubelet[2688]: I0129 16:16:00.304627 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.304726 kubelet[2688]: I0129 16:16:00.304646 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.307220 kubelet[2688]: I0129 16:16:00.307190 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cni-path" (OuterVolumeSpecName: "cni-path") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.307710 kubelet[2688]: I0129 16:16:00.307156 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-hostproc" (OuterVolumeSpecName: "hostproc") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.307812 kubelet[2688]: I0129 16:16:00.307543 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.307874 kubelet[2688]: I0129 16:16:00.307566 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 16:16:00.308027 kubelet[2688]: I0129 16:16:00.308007 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-kube-api-access-fjqvh" (OuterVolumeSpecName: "kube-api-access-fjqvh") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "kube-api-access-fjqvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:00.308193 kubelet[2688]: I0129 16:16:00.308159 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1636ed8a-366c-4cef-82cb-84068f96f65d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1636ed8a-366c-4cef-82cb-84068f96f65d" (UID: "1636ed8a-366c-4cef-82cb-84068f96f65d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 16:16:00.308404 kubelet[2688]: I0129 16:16:00.308126 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f6a4e58f-5e56-4521-b953-164312632cb3" (UID: "f6a4e58f-5e56-4521-b953-164312632cb3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:00.308938 kubelet[2688]: I0129 16:16:00.308914 2688 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1636ed8a-366c-4cef-82cb-84068f96f65d-kube-api-access-zf9zb" (OuterVolumeSpecName: "kube-api-access-zf9zb") pod "1636ed8a-366c-4cef-82cb-84068f96f65d" (UID: "1636ed8a-366c-4cef-82cb-84068f96f65d"). InnerVolumeSpecName "kube-api-access-zf9zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 16:16:00.397315 kubelet[2688]: I0129 16:16:00.397261 2688 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fjqvh\" (UniqueName: \"kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-kube-api-access-fjqvh\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397315 kubelet[2688]: I0129 16:16:00.397318 2688 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-host-proc-sys-net\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397559 kubelet[2688]: I0129 16:16:00.397351 2688 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1636ed8a-366c-4cef-82cb-84068f96f65d-cilium-config-path\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397559 kubelet[2688]: I0129 16:16:00.397369 2688 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-xtables-lock\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397559 kubelet[2688]: I0129 16:16:00.397385 2688 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-etc-cni-netd\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397559 kubelet[2688]: I0129 16:16:00.397400 2688 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cni-path\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397559 kubelet[2688]: I0129 16:16:00.397415 2688 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-run\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397559 kubelet[2688]: I0129 16:16:00.397429 2688 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-hostproc\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397559 kubelet[2688]: I0129 16:16:00.397445 2688 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-config-path\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397559 kubelet[2688]: I0129 16:16:00.397499 2688 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6a4e58f-5e56-4521-b953-164312632cb3-clustermesh-secrets\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397811 kubelet[2688]: I0129 16:16:00.397516 2688 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-bpf-maps\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397811 kubelet[2688]: I0129 16:16:00.397531 2688 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zf9zb\" (UniqueName: \"kubernetes.io/projected/1636ed8a-366c-4cef-82cb-84068f96f65d-kube-api-access-zf9zb\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397811 kubelet[2688]: I0129 16:16:00.397547 2688 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-lib-modules\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397811 kubelet[2688]: I0129 16:16:00.397561 2688 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-cilium-cgroup\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397811 kubelet[2688]: I0129 16:16:00.397575 2688 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6a4e58f-5e56-4521-b953-164312632cb3-host-proc-sys-kernel\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.397811 kubelet[2688]: I0129 16:16:00.397590 2688 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6a4e58f-5e56-4521-b953-164312632cb3-hubble-tls\") on node \"ci-4230-0-0-0-1a94fc8352\" DevicePath \"\"" Jan 29 16:16:00.959094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d-rootfs.mount: Deactivated successfully. Jan 29 16:16:00.959226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d-shm.mount: Deactivated successfully. Jan 29 16:16:00.959300 systemd[1]: var-lib-kubelet-pods-f6a4e58f\x2d5e56\x2d4521\x2db953\x2d164312632cb3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfjqvh.mount: Deactivated successfully. Jan 29 16:16:00.959415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98-rootfs.mount: Deactivated successfully. Jan 29 16:16:00.959503 systemd[1]: var-lib-kubelet-pods-1636ed8a\x2d366c\x2d4cef\x2d82cb\x2d84068f96f65d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzf9zb.mount: Deactivated successfully. Jan 29 16:16:00.959576 systemd[1]: var-lib-kubelet-pods-f6a4e58f\x2d5e56\x2d4521\x2db953\x2d164312632cb3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 29 16:16:00.959637 systemd[1]: var-lib-kubelet-pods-f6a4e58f\x2d5e56\x2d4521\x2db953\x2d164312632cb3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 16:16:00.981689 kubelet[2688]: I0129 16:16:00.981504 2688 scope.go:117] "RemoveContainer" containerID="051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de" Jan 29 16:16:00.985934 containerd[1493]: time="2025-01-29T16:16:00.985658181Z" level=info msg="RemoveContainer for \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\"" Jan 29 16:16:00.988184 systemd[1]: Removed slice kubepods-besteffort-pod1636ed8a_366c_4cef_82cb_84068f96f65d.slice - libcontainer container kubepods-besteffort-pod1636ed8a_366c_4cef_82cb_84068f96f65d.slice. Jan 29 16:16:00.994405 containerd[1493]: time="2025-01-29T16:16:00.993070644Z" level=info msg="RemoveContainer for \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\" returns successfully" Jan 29 16:16:00.997422 kubelet[2688]: I0129 16:16:00.996271 2688 scope.go:117] "RemoveContainer" containerID="051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de" Jan 29 16:16:00.997738 containerd[1493]: time="2025-01-29T16:16:00.996945479Z" level=error msg="ContainerStatus for \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\": not found" Jan 29 16:16:00.996438 systemd[1]: Removed slice kubepods-burstable-podf6a4e58f_5e56_4521_b953_164312632cb3.slice - libcontainer container kubepods-burstable-podf6a4e58f_5e56_4521_b953_164312632cb3.slice. 
Jan 29 16:16:00.997927 kubelet[2688]: E0129 16:16:00.997733 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\": not found" containerID="051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de" Jan 29 16:16:00.997927 kubelet[2688]: I0129 16:16:00.997769 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de"} err="failed to get container status \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\": rpc error: code = NotFound desc = an error occurred when try to find container \"051a7d3e967221c7a8d2b793388ec4aceff08bc9fe53ff7d4f19686ead6893de\": not found" Jan 29 16:16:00.997927 kubelet[2688]: I0129 16:16:00.997848 2688 scope.go:117] "RemoveContainer" containerID="59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8" Jan 29 16:16:00.996549 systemd[1]: kubepods-burstable-podf6a4e58f_5e56_4521_b953_164312632cb3.slice: Consumed 8.054s CPU time, 124.8M memory peak, 144K read from disk, 12.9M written to disk. 
Jan 29 16:16:01.001567 containerd[1493]: time="2025-01-29T16:16:01.001071478Z" level=info msg="RemoveContainer for \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\"" Jan 29 16:16:01.004661 containerd[1493]: time="2025-01-29T16:16:01.004626307Z" level=info msg="RemoveContainer for \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\" returns successfully" Jan 29 16:16:01.004966 kubelet[2688]: I0129 16:16:01.004841 2688 scope.go:117] "RemoveContainer" containerID="50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a" Jan 29 16:16:01.006561 containerd[1493]: time="2025-01-29T16:16:01.006197737Z" level=info msg="RemoveContainer for \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\"" Jan 29 16:16:01.011515 containerd[1493]: time="2025-01-29T16:16:01.011131033Z" level=info msg="RemoveContainer for \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\" returns successfully" Jan 29 16:16:01.012219 kubelet[2688]: I0129 16:16:01.012191 2688 scope.go:117] "RemoveContainer" containerID="676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4" Jan 29 16:16:01.014163 containerd[1493]: time="2025-01-29T16:16:01.014045649Z" level=info msg="RemoveContainer for \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\"" Jan 29 16:16:01.019973 containerd[1493]: time="2025-01-29T16:16:01.019927763Z" level=info msg="RemoveContainer for \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\" returns successfully" Jan 29 16:16:01.020627 kubelet[2688]: I0129 16:16:01.020510 2688 scope.go:117] "RemoveContainer" containerID="3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331" Jan 29 16:16:01.022203 containerd[1493]: time="2025-01-29T16:16:01.021937602Z" level=info msg="RemoveContainer for \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\"" Jan 29 16:16:01.026554 containerd[1493]: time="2025-01-29T16:16:01.026136483Z" level=info msg="RemoveContainer 
for \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\" returns successfully" Jan 29 16:16:01.026653 kubelet[2688]: I0129 16:16:01.026345 2688 scope.go:117] "RemoveContainer" containerID="4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75" Jan 29 16:16:01.029306 containerd[1493]: time="2025-01-29T16:16:01.029269703Z" level=info msg="RemoveContainer for \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\"" Jan 29 16:16:01.031779 kubelet[2688]: I0129 16:16:01.031741 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1636ed8a-366c-4cef-82cb-84068f96f65d" path="/var/lib/kubelet/pods/1636ed8a-366c-4cef-82cb-84068f96f65d/volumes" Jan 29 16:16:01.034220 containerd[1493]: time="2025-01-29T16:16:01.034092156Z" level=info msg="RemoveContainer for \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\" returns successfully" Jan 29 16:16:01.034391 kubelet[2688]: I0129 16:16:01.034292 2688 scope.go:117] "RemoveContainer" containerID="59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8" Jan 29 16:16:01.035472 containerd[1493]: time="2025-01-29T16:16:01.035413302Z" level=error msg="ContainerStatus for \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\": not found" Jan 29 16:16:01.035832 kubelet[2688]: E0129 16:16:01.035747 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\": not found" containerID="59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8" Jan 29 16:16:01.035902 kubelet[2688]: I0129 16:16:01.035839 2688 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8"} err="failed to get container status \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\": rpc error: code = NotFound desc = an error occurred when try to find container \"59e8d19569d2aef816391ac498db7b0eb7456829f53057a604b2d3bf2451aea8\": not found" Jan 29 16:16:01.035902 kubelet[2688]: I0129 16:16:01.035869 2688 scope.go:117] "RemoveContainer" containerID="50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a" Jan 29 16:16:01.037121 containerd[1493]: time="2025-01-29T16:16:01.036701727Z" level=error msg="ContainerStatus for \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\": not found" Jan 29 16:16:01.037601 kubelet[2688]: E0129 16:16:01.037202 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\": not found" containerID="50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a" Jan 29 16:16:01.037601 kubelet[2688]: I0129 16:16:01.037236 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a"} err="failed to get container status \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\": rpc error: code = NotFound desc = an error occurred when try to find container \"50124f5cc8d7fa834f736c21e700aa4ab3da17444b949f7ae247de3988d3d64a\": not found" Jan 29 16:16:01.037601 kubelet[2688]: I0129 16:16:01.037254 2688 scope.go:117] "RemoveContainer" containerID="676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4" Jan 29 16:16:01.037959 containerd[1493]: 
time="2025-01-29T16:16:01.037819068Z" level=error msg="ContainerStatus for \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\": not found" Jan 29 16:16:01.038016 kubelet[2688]: E0129 16:16:01.037931 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\": not found" containerID="676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4" Jan 29 16:16:01.038194 kubelet[2688]: I0129 16:16:01.038077 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4"} err="failed to get container status \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"676cbd4599db734ef4d5c1848b8156f6f974a16fb279c85ba08aa71ad39674f4\": not found" Jan 29 16:16:01.038194 kubelet[2688]: I0129 16:16:01.038100 2688 scope.go:117] "RemoveContainer" containerID="3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331" Jan 29 16:16:01.038624 containerd[1493]: time="2025-01-29T16:16:01.038542682Z" level=error msg="ContainerStatus for \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\": not found" Jan 29 16:16:01.038785 kubelet[2688]: E0129 16:16:01.038762 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\": not 
found" containerID="3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331" Jan 29 16:16:01.038838 kubelet[2688]: I0129 16:16:01.038790 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331"} err="failed to get container status \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e54843b2d3a40a39bbb5f11987b485bdd6b8b498499b8947d3a5ccfec4d6331\": not found" Jan 29 16:16:01.038838 kubelet[2688]: I0129 16:16:01.038806 2688 scope.go:117] "RemoveContainer" containerID="4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75" Jan 29 16:16:01.039059 containerd[1493]: time="2025-01-29T16:16:01.039012411Z" level=error msg="ContainerStatus for \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\": not found" Jan 29 16:16:01.039226 kubelet[2688]: E0129 16:16:01.039149 2688 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\": not found" containerID="4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75" Jan 29 16:16:01.039226 kubelet[2688]: I0129 16:16:01.039172 2688 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75"} err="failed to get container status \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\": rpc error: code = NotFound desc = an error occurred when try to find container \"4eabb1a534906e9e7a10d96008f75252eb40a14df24aee0ff163048b9cf94a75\": not found" Jan 29 
16:16:01.204431 kubelet[2688]: E0129 16:16:01.204304 2688 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:16:02.039767 sshd[4314]: Connection closed by 139.178.68.195 port 52122 Jan 29 16:16:02.040947 sshd-session[4312]: pam_unix(sshd:session): session closed for user core Jan 29 16:16:02.047688 systemd[1]: sshd@23-91.107.217.81:22-139.178.68.195:52122.service: Deactivated successfully. Jan 29 16:16:02.050777 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 16:16:02.052930 systemd-logind[1471]: Session 21 logged out. Waiting for processes to exit. Jan 29 16:16:02.054511 systemd-logind[1471]: Removed session 21. Jan 29 16:16:02.214774 systemd[1]: Started sshd@24-91.107.217.81:22-139.178.68.195:52124.service - OpenSSH per-connection server daemon (139.178.68.195:52124). Jan 29 16:16:03.030401 kubelet[2688]: I0129 16:16:03.029551 2688 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6a4e58f-5e56-4521-b953-164312632cb3" path="/var/lib/kubelet/pods/f6a4e58f-5e56-4521-b953-164312632cb3/volumes" Jan 29 16:16:03.199233 sshd[4484]: Accepted publickey for core from 139.178.68.195 port 52124 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:16:03.201762 sshd-session[4484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:16:03.209950 systemd-logind[1471]: New session 22 of user core. Jan 29 16:16:03.215581 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 29 16:16:04.810273 kubelet[2688]: E0129 16:16:04.807799 2688 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6a4e58f-5e56-4521-b953-164312632cb3" containerName="mount-cgroup" Jan 29 16:16:04.810273 kubelet[2688]: E0129 16:16:04.807840 2688 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6a4e58f-5e56-4521-b953-164312632cb3" containerName="apply-sysctl-overwrites" Jan 29 16:16:04.810273 kubelet[2688]: E0129 16:16:04.807848 2688 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6a4e58f-5e56-4521-b953-164312632cb3" containerName="cilium-agent" Jan 29 16:16:04.810273 kubelet[2688]: E0129 16:16:04.807854 2688 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1636ed8a-366c-4cef-82cb-84068f96f65d" containerName="cilium-operator" Jan 29 16:16:04.810273 kubelet[2688]: E0129 16:16:04.807860 2688 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6a4e58f-5e56-4521-b953-164312632cb3" containerName="mount-bpf-fs" Jan 29 16:16:04.810273 kubelet[2688]: E0129 16:16:04.807865 2688 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6a4e58f-5e56-4521-b953-164312632cb3" containerName="clean-cilium-state" Jan 29 16:16:04.810273 kubelet[2688]: I0129 16:16:04.807891 2688 memory_manager.go:354] "RemoveStaleState removing state" podUID="1636ed8a-366c-4cef-82cb-84068f96f65d" containerName="cilium-operator" Jan 29 16:16:04.810273 kubelet[2688]: I0129 16:16:04.807897 2688 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6a4e58f-5e56-4521-b953-164312632cb3" containerName="cilium-agent" Jan 29 16:16:04.821463 systemd[1]: Created slice kubepods-burstable-podd8ef2d65_0cf5_4e9d_bedd_33b18a4714a0.slice - libcontainer container kubepods-burstable-podd8ef2d65_0cf5_4e9d_bedd_33b18a4714a0.slice. 
Jan 29 16:16:04.929689 kubelet[2688]: I0129 16:16:04.929197 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-bpf-maps\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.929689 kubelet[2688]: I0129 16:16:04.929282 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-lib-modules\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.929689 kubelet[2688]: I0129 16:16:04.929318 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxt5c\" (UniqueName: \"kubernetes.io/projected/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-kube-api-access-mxt5c\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.929689 kubelet[2688]: I0129 16:16:04.929380 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-etc-cni-netd\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.929689 kubelet[2688]: I0129 16:16:04.929416 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-cilium-config-path\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.929689 kubelet[2688]: I0129 16:16:04.929446 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-hostproc\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930102 kubelet[2688]: I0129 16:16:04.929475 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-cilium-cgroup\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930775 kubelet[2688]: I0129 16:16:04.930302 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-xtables-lock\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930775 kubelet[2688]: I0129 16:16:04.930396 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-host-proc-sys-net\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930775 kubelet[2688]: I0129 16:16:04.930430 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-cilium-run\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930775 kubelet[2688]: I0129 16:16:04.930514 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-cni-path\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930775 kubelet[2688]: I0129 16:16:04.930581 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-clustermesh-secrets\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930775 kubelet[2688]: I0129 16:16:04.930662 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-hubble-tls\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930986 kubelet[2688]: I0129 16:16:04.930700 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-host-proc-sys-kernel\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.930986 kubelet[2688]: I0129 16:16:04.930768 2688 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0-cilium-ipsec-secrets\") pod \"cilium-gxcfh\" (UID: \"d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0\") " pod="kube-system/cilium-gxcfh"
Jan 29 16:16:04.954029 sshd[4486]: Connection closed by 139.178.68.195 port 52124
Jan 29 16:16:04.956089 sshd-session[4484]: pam_unix(sshd:session): session closed for user core
Jan 29 16:16:04.961036 systemd[1]: sshd@24-91.107.217.81:22-139.178.68.195:52124.service: Deactivated successfully.
Jan 29 16:16:04.963692 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 16:16:04.965465 systemd-logind[1471]: Session 22 logged out. Waiting for processes to exit. Jan 29 16:16:04.967257 systemd-logind[1471]: Removed session 22. Jan 29 16:16:05.128194 systemd[1]: Started sshd@25-91.107.217.81:22-139.178.68.195:60476.service - OpenSSH per-connection server daemon (139.178.68.195:60476). Jan 29 16:16:05.130911 containerd[1493]: time="2025-01-29T16:16:05.130849292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxcfh,Uid:d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0,Namespace:kube-system,Attempt:0,}" Jan 29 16:16:05.166877 containerd[1493]: time="2025-01-29T16:16:05.166768986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:16:05.167572 containerd[1493]: time="2025-01-29T16:16:05.167216595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:16:05.167572 containerd[1493]: time="2025-01-29T16:16:05.167243395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:16:05.167572 containerd[1493]: time="2025-01-29T16:16:05.167368558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:16:05.188750 systemd[1]: Started cri-containerd-96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b.scope - libcontainer container 96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b. 
Jan 29 16:16:05.215999 containerd[1493]: time="2025-01-29T16:16:05.215953496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gxcfh,Uid:d8ef2d65-0cf5-4e9d-bedd-33b18a4714a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\"" Jan 29 16:16:05.221247 containerd[1493]: time="2025-01-29T16:16:05.221200877Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 16:16:05.237960 containerd[1493]: time="2025-01-29T16:16:05.237903680Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1dfb0bc130aa20d957f0af71e2d2780a3592d1d7cf0b0d7f289cecacd81c6001\"" Jan 29 16:16:05.239082 containerd[1493]: time="2025-01-29T16:16:05.239009301Z" level=info msg="StartContainer for \"1dfb0bc130aa20d957f0af71e2d2780a3592d1d7cf0b0d7f289cecacd81c6001\"" Jan 29 16:16:05.276744 systemd[1]: Started cri-containerd-1dfb0bc130aa20d957f0af71e2d2780a3592d1d7cf0b0d7f289cecacd81c6001.scope - libcontainer container 1dfb0bc130aa20d957f0af71e2d2780a3592d1d7cf0b0d7f289cecacd81c6001. Jan 29 16:16:05.310392 containerd[1493]: time="2025-01-29T16:16:05.310315999Z" level=info msg="StartContainer for \"1dfb0bc130aa20d957f0af71e2d2780a3592d1d7cf0b0d7f289cecacd81c6001\" returns successfully" Jan 29 16:16:05.326273 systemd[1]: cri-containerd-1dfb0bc130aa20d957f0af71e2d2780a3592d1d7cf0b0d7f289cecacd81c6001.scope: Deactivated successfully. 
Jan 29 16:16:05.366868 containerd[1493]: time="2025-01-29T16:16:05.366145397Z" level=info msg="shim disconnected" id=1dfb0bc130aa20d957f0af71e2d2780a3592d1d7cf0b0d7f289cecacd81c6001 namespace=k8s.io Jan 29 16:16:05.366868 containerd[1493]: time="2025-01-29T16:16:05.366235359Z" level=warning msg="cleaning up after shim disconnected" id=1dfb0bc130aa20d957f0af71e2d2780a3592d1d7cf0b0d7f289cecacd81c6001 namespace=k8s.io Jan 29 16:16:05.366868 containerd[1493]: time="2025-01-29T16:16:05.366247719Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:16:06.008405 containerd[1493]: time="2025-01-29T16:16:06.008181639Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 16:16:06.024331 containerd[1493]: time="2025-01-29T16:16:06.024263110Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f\"" Jan 29 16:16:06.026178 containerd[1493]: time="2025-01-29T16:16:06.026138226Z" level=info msg="StartContainer for \"d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f\"" Jan 29 16:16:06.059601 systemd[1]: Started cri-containerd-d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f.scope - libcontainer container d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f. Jan 29 16:16:06.086212 containerd[1493]: time="2025-01-29T16:16:06.086166226Z" level=info msg="StartContainer for \"d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f\" returns successfully" Jan 29 16:16:06.095166 systemd[1]: cri-containerd-d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f.scope: Deactivated successfully. 
Jan 29 16:16:06.118032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f-rootfs.mount: Deactivated successfully. Jan 29 16:16:06.122449 containerd[1493]: time="2025-01-29T16:16:06.122326084Z" level=info msg="shim disconnected" id=d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f namespace=k8s.io Jan 29 16:16:06.122449 containerd[1493]: time="2025-01-29T16:16:06.122397326Z" level=warning msg="cleaning up after shim disconnected" id=d59a9a8612d7b6b06ea6d8b2dc3f6fc80a4ab85def4478e3e0d6d32705832b6f namespace=k8s.io Jan 29 16:16:06.122449 containerd[1493]: time="2025-01-29T16:16:06.122407406Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:16:06.130273 sshd[4502]: Accepted publickey for core from 139.178.68.195 port 60476 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:16:06.132151 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:16:06.138477 systemd-logind[1471]: New session 23 of user core. Jan 29 16:16:06.141598 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 16:16:06.205658 kubelet[2688]: E0129 16:16:06.205413 2688 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 16:16:06.808141 sshd[4671]: Connection closed by 139.178.68.195 port 60476 Jan 29 16:16:06.808759 sshd-session[4502]: pam_unix(sshd:session): session closed for user core Jan 29 16:16:06.813316 systemd[1]: sshd@25-91.107.217.81:22-139.178.68.195:60476.service: Deactivated successfully. Jan 29 16:16:06.815924 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 16:16:06.817820 systemd-logind[1471]: Session 23 logged out. Waiting for processes to exit. Jan 29 16:16:06.819240 systemd-logind[1471]: Removed session 23. 
Jan 29 16:16:06.990776 systemd[1]: Started sshd@26-91.107.217.81:22-139.178.68.195:60480.service - OpenSSH per-connection server daemon (139.178.68.195:60480). Jan 29 16:16:07.015687 containerd[1493]: time="2025-01-29T16:16:07.015496137Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 16:16:07.039181 containerd[1493]: time="2025-01-29T16:16:07.037992372Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d\"" Jan 29 16:16:07.040498 containerd[1493]: time="2025-01-29T16:16:07.040401778Z" level=info msg="StartContainer for \"438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d\"" Jan 29 16:16:07.075643 systemd[1]: run-containerd-runc-k8s.io-438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d-runc.wq9NvD.mount: Deactivated successfully. Jan 29 16:16:07.084605 systemd[1]: Started cri-containerd-438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d.scope - libcontainer container 438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d. Jan 29 16:16:07.124017 containerd[1493]: time="2025-01-29T16:16:07.123966233Z" level=info msg="StartContainer for \"438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d\" returns successfully" Jan 29 16:16:07.127462 systemd[1]: cri-containerd-438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d.scope: Deactivated successfully. Jan 29 16:16:07.152127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d-rootfs.mount: Deactivated successfully. 
Jan 29 16:16:07.160396 containerd[1493]: time="2025-01-29T16:16:07.160264574Z" level=info msg="shim disconnected" id=438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d namespace=k8s.io Jan 29 16:16:07.160396 containerd[1493]: time="2025-01-29T16:16:07.160362936Z" level=warning msg="cleaning up after shim disconnected" id=438cb8632de5f721777747512fd2f0cd1abd0aafc7eaa032ff8568197b20784d namespace=k8s.io Jan 29 16:16:07.160396 containerd[1493]: time="2025-01-29T16:16:07.160375136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:16:07.679094 kubelet[2688]: I0129 16:16:07.679006 2688 setters.go:600] "Node became not ready" node="ci-4230-0-0-0-1a94fc8352" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T16:16:07Z","lastTransitionTime":"2025-01-29T16:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 16:16:07.987785 sshd[4678]: Accepted publickey for core from 139.178.68.195 port 60480 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:16:07.989666 sshd-session[4678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:16:07.996603 systemd-logind[1471]: New session 24 of user core. Jan 29 16:16:08.001567 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 29 16:16:08.019087 containerd[1493]: time="2025-01-29T16:16:08.018893520Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 16:16:08.032416 containerd[1493]: time="2025-01-29T16:16:08.032366740Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7\"" Jan 29 16:16:08.033307 containerd[1493]: time="2025-01-29T16:16:08.033045593Z" level=info msg="StartContainer for \"5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7\"" Jan 29 16:16:08.075654 systemd[1]: Started cri-containerd-5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7.scope - libcontainer container 5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7. Jan 29 16:16:08.102360 systemd[1]: cri-containerd-5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7.scope: Deactivated successfully. Jan 29 16:16:08.104982 containerd[1493]: time="2025-01-29T16:16:08.104866260Z" level=info msg="StartContainer for \"5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7\" returns successfully" Jan 29 16:16:08.122715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7-rootfs.mount: Deactivated successfully. 
Jan 29 16:16:08.127996 containerd[1493]: time="2025-01-29T16:16:08.127811584Z" level=info msg="shim disconnected" id=5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7 namespace=k8s.io Jan 29 16:16:08.127996 containerd[1493]: time="2025-01-29T16:16:08.127874505Z" level=warning msg="cleaning up after shim disconnected" id=5e0c358a5118e1812ff1b3094a513aac4991250731aa0ddfe5bb4454ce6b67d7 namespace=k8s.io Jan 29 16:16:08.127996 containerd[1493]: time="2025-01-29T16:16:08.127882665Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:16:09.028581 containerd[1493]: time="2025-01-29T16:16:09.028431461Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 16:16:09.050131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190318522.mount: Deactivated successfully. Jan 29 16:16:09.058659 containerd[1493]: time="2025-01-29T16:16:09.058610724Z" level=info msg="CreateContainer within sandbox \"96cc460f88829fa89b653adfb07910b705da2fbb6c0d4f2a56d9918b006b474b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8b3e10caa332a409d1753f7b9466bc7b1e25cf8b18776e306a5cd90005d8f54b\"" Jan 29 16:16:09.059745 containerd[1493]: time="2025-01-29T16:16:09.059658784Z" level=info msg="StartContainer for \"8b3e10caa332a409d1753f7b9466bc7b1e25cf8b18776e306a5cd90005d8f54b\"" Jan 29 16:16:09.093645 systemd[1]: Started cri-containerd-8b3e10caa332a409d1753f7b9466bc7b1e25cf8b18776e306a5cd90005d8f54b.scope - libcontainer container 8b3e10caa332a409d1753f7b9466bc7b1e25cf8b18776e306a5cd90005d8f54b. 
Jan 29 16:16:09.127924 containerd[1493]: time="2025-01-29T16:16:09.127857981Z" level=info msg="StartContainer for \"8b3e10caa332a409d1753f7b9466bc7b1e25cf8b18776e306a5cd90005d8f54b\" returns successfully"
Jan 29 16:16:09.422444 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 16:16:10.056061 kubelet[2688]: I0129 16:16:10.055181 2688 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gxcfh" podStartSLOduration=6.055163734 podStartE2EDuration="6.055163734s" podCreationTimestamp="2025-01-29 16:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:16:10.051981792 +0000 UTC m=+359.138679905" watchObservedRunningTime="2025-01-29 16:16:10.055163734 +0000 UTC m=+359.141861807"
Jan 29 16:16:10.746822 kubelet[2688]: E0129 16:16:10.746752 2688 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:37854->127.0.0.1:40709: write tcp 127.0.0.1:37854->127.0.0.1:40709: write: broken pipe
Jan 29 16:16:11.048290 containerd[1493]: time="2025-01-29T16:16:11.047396821Z" level=info msg="StopPodSandbox for \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\""
Jan 29 16:16:11.048290 containerd[1493]: time="2025-01-29T16:16:11.047500423Z" level=info msg="TearDown network for sandbox \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\" successfully"
Jan 29 16:16:11.048290 containerd[1493]: time="2025-01-29T16:16:11.047510903Z" level=info msg="StopPodSandbox for \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\" returns successfully"
Jan 29 16:16:11.048290 containerd[1493]: time="2025-01-29T16:16:11.048182076Z" level=info msg="RemovePodSandbox for \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\""
Jan 29 16:16:11.048290 containerd[1493]: time="2025-01-29T16:16:11.048221197Z" level=info msg="Forcibly stopping sandbox \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\""
Jan 29 16:16:11.048290 containerd[1493]: time="2025-01-29T16:16:11.048270598Z" level=info msg="TearDown network for sandbox \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\" successfully"
Jan 29 16:16:11.052667 containerd[1493]: time="2025-01-29T16:16:11.052595121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:16:11.052667 containerd[1493]: time="2025-01-29T16:16:11.052670123Z" level=info msg="RemovePodSandbox \"17f5e3e210a0f0a65f6abe87f683419964ffe022661f545dfa1d677136374a98\" returns successfully"
Jan 29 16:16:11.054707 containerd[1493]: time="2025-01-29T16:16:11.054661961Z" level=info msg="StopPodSandbox for \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\""
Jan 29 16:16:11.054817 containerd[1493]: time="2025-01-29T16:16:11.054754443Z" level=info msg="TearDown network for sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" successfully"
Jan 29 16:16:11.054817 containerd[1493]: time="2025-01-29T16:16:11.054765443Z" level=info msg="StopPodSandbox for \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" returns successfully"
Jan 29 16:16:11.055840 containerd[1493]: time="2025-01-29T16:16:11.055808103Z" level=info msg="RemovePodSandbox for \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\""
Jan 29 16:16:11.057208 containerd[1493]: time="2025-01-29T16:16:11.055944506Z" level=info msg="Forcibly stopping sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\""
Jan 29 16:16:11.057208 containerd[1493]: time="2025-01-29T16:16:11.056003867Z" level=info msg="TearDown network for sandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" successfully"
Jan 29 16:16:11.059154 containerd[1493]: time="2025-01-29T16:16:11.059117007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 16:16:11.059310 containerd[1493]: time="2025-01-29T16:16:11.059290690Z" level=info msg="RemovePodSandbox \"9aaf63fe585a4d645a7b2addb7b32ce0ae1977fa0821184647447a361fc35d7d\" returns successfully"
Jan 29 16:16:12.348094 systemd-networkd[1391]: lxc_health: Link UP
Jan 29 16:16:12.371962 systemd-networkd[1391]: lxc_health: Gained carrier
Jan 29 16:16:14.252693 systemd-networkd[1391]: lxc_health: Gained IPv6LL
Jan 29 16:16:19.550970 sshd[4735]: Connection closed by 139.178.68.195 port 60480
Jan 29 16:16:19.551970 sshd-session[4678]: pam_unix(sshd:session): session closed for user core
Jan 29 16:16:19.558001 systemd[1]: sshd@26-91.107.217.81:22-139.178.68.195:60480.service: Deactivated successfully.
Jan 29 16:16:19.560187 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 16:16:19.561393 systemd-logind[1471]: Session 24 logged out. Waiting for processes to exit.
Jan 29 16:16:19.562480 systemd-logind[1471]: Removed session 24.
Jan 29 16:16:35.448154 systemd[1]: cri-containerd-23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99.scope: Deactivated successfully.
Jan 29 16:16:35.451289 systemd[1]: cri-containerd-23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99.scope: Consumed 6.247s CPU time, 59M memory peak.
Jan 29 16:16:35.477830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99-rootfs.mount: Deactivated successfully.
Jan 29 16:16:35.494751 containerd[1493]: time="2025-01-29T16:16:35.494632145Z" level=info msg="shim disconnected" id=23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99 namespace=k8s.io Jan 29 16:16:35.494751 containerd[1493]: time="2025-01-29T16:16:35.494731067Z" level=warning msg="cleaning up after shim disconnected" id=23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99 namespace=k8s.io Jan 29 16:16:35.494751 containerd[1493]: time="2025-01-29T16:16:35.494746947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:16:35.759978 kubelet[2688]: E0129 16:16:35.759305 2688 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43244->10.0.0.2:2379: read: connection timed out" Jan 29 16:16:35.764610 systemd[1]: cri-containerd-f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f.scope: Deactivated successfully. Jan 29 16:16:35.766598 systemd[1]: cri-containerd-f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f.scope: Consumed 2.003s CPU time, 22.7M memory peak. Jan 29 16:16:35.786822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f-rootfs.mount: Deactivated successfully. 
Jan 29 16:16:35.796379 containerd[1493]: time="2025-01-29T16:16:35.796277772Z" level=info msg="shim disconnected" id=f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f namespace=k8s.io Jan 29 16:16:35.796379 containerd[1493]: time="2025-01-29T16:16:35.796372254Z" level=warning msg="cleaning up after shim disconnected" id=f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f namespace=k8s.io Jan 29 16:16:35.796379 containerd[1493]: time="2025-01-29T16:16:35.796389894Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:16:35.811614 containerd[1493]: time="2025-01-29T16:16:35.810451286Z" level=warning msg="cleanup warnings time=\"2025-01-29T16:16:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 16:16:36.099870 kubelet[2688]: I0129 16:16:36.099426 2688 scope.go:117] "RemoveContainer" containerID="f069597f9000c076b09d7a33b880a8ca9772c4841360e19e97236bb73432041f" Jan 29 16:16:36.102482 containerd[1493]: time="2025-01-29T16:16:36.102450806Z" level=info msg="CreateContainer within sandbox \"53162de42d40e28bec820b3c293ff42a6daf3e87184cfb6bf3571e9a4ee70489\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 29 16:16:36.103124 kubelet[2688]: I0129 16:16:36.102982 2688 scope.go:117] "RemoveContainer" containerID="23d000931b417809b285a15df702878bac7ac515a75fcaf079c3b334cd5c2c99" Jan 29 16:16:36.105544 containerd[1493]: time="2025-01-29T16:16:36.105418624Z" level=info msg="CreateContainer within sandbox \"4d2f78fa65fcb1788b3320b10020e005418e9286f7a8d4d57a119692a05661c2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 29 16:16:36.119895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3265704929.mount: Deactivated successfully. 
Jan 29 16:16:36.128733 containerd[1493]: time="2025-01-29T16:16:36.128623192Z" level=info msg="CreateContainer within sandbox \"53162de42d40e28bec820b3c293ff42a6daf3e87184cfb6bf3571e9a4ee70489\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"92c0ed3934a0b7cf3a601dc96031b8c6c004dcceefef9763d8f8981bfcea2cb6\"" Jan 29 16:16:36.129508 containerd[1493]: time="2025-01-29T16:16:36.129399727Z" level=info msg="CreateContainer within sandbox \"4d2f78fa65fcb1788b3320b10020e005418e9286f7a8d4d57a119692a05661c2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d0f7aff0a171d010dcc6db600722d7f17a7c9ee3da5dd99180d5d2f0c9a624e1\"" Jan 29 16:16:36.131358 containerd[1493]: time="2025-01-29T16:16:36.129893417Z" level=info msg="StartContainer for \"d0f7aff0a171d010dcc6db600722d7f17a7c9ee3da5dd99180d5d2f0c9a624e1\"" Jan 29 16:16:36.131587 containerd[1493]: time="2025-01-29T16:16:36.131541968Z" level=info msg="StartContainer for \"92c0ed3934a0b7cf3a601dc96031b8c6c004dcceefef9763d8f8981bfcea2cb6\"" Jan 29 16:16:36.160881 systemd[1]: Started cri-containerd-d0f7aff0a171d010dcc6db600722d7f17a7c9ee3da5dd99180d5d2f0c9a624e1.scope - libcontainer container d0f7aff0a171d010dcc6db600722d7f17a7c9ee3da5dd99180d5d2f0c9a624e1. Jan 29 16:16:36.165771 systemd[1]: Started cri-containerd-92c0ed3934a0b7cf3a601dc96031b8c6c004dcceefef9763d8f8981bfcea2cb6.scope - libcontainer container 92c0ed3934a0b7cf3a601dc96031b8c6c004dcceefef9763d8f8981bfcea2cb6. 
Jan 29 16:16:36.214566 containerd[1493]: time="2025-01-29T16:16:36.214514811Z" level=info msg="StartContainer for \"d0f7aff0a171d010dcc6db600722d7f17a7c9ee3da5dd99180d5d2f0c9a624e1\" returns successfully" Jan 29 16:16:36.224631 containerd[1493]: time="2025-01-29T16:16:36.224584326Z" level=info msg="StartContainer for \"92c0ed3934a0b7cf3a601dc96031b8c6c004dcceefef9763d8f8981bfcea2cb6\" returns successfully" Jan 29 16:16:38.319125 kubelet[2688]: E0129 16:16:38.318955 2688 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:43098->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-0-0-0-1a94fc8352.181f3607b35136c4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-0-0-0-1a94fc8352,UID:87bc5600ae7e68cbf76ecc535ad2a727,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-0-1a94fc8352,},FirstTimestamp:2025-01-29 16:16:27.889415876 +0000 UTC m=+376.976114029,LastTimestamp:2025-01-29 16:16:27.889415876 +0000 UTC m=+376.976114029,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-0-1a94fc8352,}"