Jan 30 13:22:40.908943 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:22:40.908970 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:22:40.908980 kernel: KASLR enabled
Jan 30 13:22:40.908987 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 30 13:22:40.908993 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 30 13:22:40.908999 kernel: random: crng init done
Jan 30 13:22:40.909006 kernel: secureboot: Secure boot disabled
Jan 30 13:22:40.909012 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:22:40.909018 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 30 13:22:40.909026 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:22:40.909033 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909039 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909045 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909051 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909059 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909067 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909073 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909080 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909086 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:22:40.909093 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:22:40.909099 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 30 13:22:40.909106 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:22:40.909113 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 13:22:40.909122 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Jan 30 13:22:40.909129 kernel: Zone ranges:
Jan 30 13:22:40.909138 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 13:22:40.909145 kernel:   DMA32    empty
Jan 30 13:22:40.909152 kernel:   Normal   [mem 0x0000000100000000-0x0000000139ffffff]
Jan 30 13:22:40.909160 kernel: Movable zone start for each node
Jan 30 13:22:40.909167 kernel: Early memory node ranges
Jan 30 13:22:40.909174 kernel:   node   0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 30 13:22:40.909182 kernel:   node   0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 30 13:22:40.909190 kernel:   node   0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 30 13:22:40.909197 kernel:   node   0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 30 13:22:40.909204 kernel:   node   0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 30 13:22:40.909211 kernel:   node   0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 30 13:22:40.909218 kernel:   node   0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 30 13:22:40.909226 kernel:   node   0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 30 13:22:40.909232 kernel:   node   0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 30 13:22:40.909238 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 30 13:22:40.909248 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 30 13:22:40.909256 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:22:40.909263 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:22:40.909271 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:22:40.909278 kernel: psci: Trusted OS migration not required
Jan 30 13:22:40.909315 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:22:40.909322 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 13:22:40.909330 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:22:40.909337 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:22:40.909344 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 13:22:40.909351 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:22:40.909357 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:22:40.909364 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:22:40.909374 kernel: CPU features: detected: Spectre-v4
Jan 30 13:22:40.909381 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:22:40.909389 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:22:40.909395 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:22:40.909402 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:22:40.909409 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:22:40.909424 kernel: alternatives: applying boot alternatives
Jan 30 13:22:40.909434 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:22:40.909441 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:22:40.909448 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:22:40.909455 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:22:40.912529 kernel: Fallback order for Node 0: 0
Jan 30 13:22:40.912562 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1008000
Jan 30 13:22:40.912569 kernel: Policy zone: Normal
Jan 30 13:22:40.912576 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:22:40.912593 kernel: software IO TLB: area num 2.
Jan 30 13:22:40.912602 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 30 13:22:40.912611 kernel: Memory: 3882296K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 213704K reserved, 0K cma-reserved)
Jan 30 13:22:40.912618 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:22:40.912625 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:22:40.912634 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:22:40.912641 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:22:40.912648 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 30 13:22:40.912663 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 30 13:22:40.912670 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:22:40.912678 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:22:40.912685 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:22:40.912692 kernel: GICv3: 256 SPIs implemented
Jan 30 13:22:40.912699 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:22:40.912705 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:22:40.912712 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:22:40.912719 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 13:22:40.912726 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 13:22:40.912733 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:22:40.912743 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:22:40.912750 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 30 13:22:40.912757 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 30 13:22:40.912764 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:22:40.912771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:22:40.912778 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:22:40.912785 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:22:40.912792 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:22:40.912799 kernel: Console: colour dummy device 80x25
Jan 30 13:22:40.912807 kernel: ACPI: Core revision 20230628
Jan 30 13:22:40.912815 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:22:40.912824 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:22:40.912831 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:22:40.912838 kernel: landlock: Up and running.
Jan 30 13:22:40.912846 kernel: SELinux:  Initializing.
Jan 30 13:22:40.912853 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:22:40.912860 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:22:40.912871 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:22:40.912878 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:22:40.912885 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:22:40.912895 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 30 13:22:40.912902 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 13:22:40.912909 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 13:22:40.912916 kernel: Remapping and enabling EFI services.
Jan 30 13:22:40.912923 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:22:40.912930 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:22:40.912938 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 13:22:40.912945 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 30 13:22:40.912952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:22:40.912961 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:22:40.912968 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:22:40.912981 kernel: SMP: Total of 2 processors activated.
Jan 30 13:22:40.912991 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:22:40.912999 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:22:40.913006 kernel: CPU features: detected: Common not Private translations
Jan 30 13:22:40.913013 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:22:40.913021 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 13:22:40.913029 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:22:40.913038 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:22:40.913045 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:22:40.913053 kernel: CPU features: detected: RAS Extension Support
Jan 30 13:22:40.913061 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 13:22:40.913068 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:22:40.913075 kernel: alternatives: applying system-wide alternatives
Jan 30 13:22:40.913083 kernel: devtmpfs: initialized
Jan 30 13:22:40.913090 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:22:40.913101 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:22:40.913108 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:22:40.913115 kernel: SMBIOS 3.0.0 present.
Jan 30 13:22:40.913123 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 30 13:22:40.913130 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:22:40.913138 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:22:40.913146 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:22:40.913154 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:22:40.913162 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:22:40.913171 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:22:40.913178 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jan 30 13:22:40.913186 kernel: cpuidle: using governor menu
Jan 30 13:22:40.913193 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:22:40.913201 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:22:40.913208 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:22:40.913216 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:22:40.913224 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:22:40.913231 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:22:40.913241 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:22:40.913249 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:22:40.913256 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:22:40.913264 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:22:40.913271 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:22:40.913279 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:22:40.913286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:22:40.913294 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:22:40.913302 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:22:40.913311 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:22:40.913318 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:22:40.913326 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:22:40.913333 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:22:40.913341 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:22:40.913348 kernel: ACPI: Interpreter enabled
Jan 30 13:22:40.913356 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:22:40.913363 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:22:40.913371 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:22:40.913380 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:22:40.913388 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:22:40.914653 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:22:40.914765 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:22:40.914835 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:22:40.914927 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:22:40.914998 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:22:40.915015 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:22:40.915023 kernel: PCI host bridge to bus 0000:00
Jan 30 13:22:40.915110 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:22:40.915184 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:22:40.915249 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:22:40.915310 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:22:40.915399 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:22:40.916555 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 30 13:22:40.916708 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 30 13:22:40.916787 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 13:22:40.916870 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.916941 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 30 13:22:40.917021 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.917100 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 30 13:22:40.917177 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.917246 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 30 13:22:40.917322 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.917391 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 30 13:22:40.917483 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.917571 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 30 13:22:40.917664 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.917736 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 30 13:22:40.917812 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.917884 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 30 13:22:40.917965 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.918041 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 30 13:22:40.918121 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 30 13:22:40.918196 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 30 13:22:40.920699 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 30 13:22:40.920801 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 30 13:22:40.920888 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 13:22:40.920964 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 30 13:22:40.921048 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:22:40.921125 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 13:22:40.921206 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 30 13:22:40.921276 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 30 13:22:40.921356 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 30 13:22:40.921431 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 30 13:22:40.921621 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 30 13:22:40.921738 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 30 13:22:40.921819 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 30 13:22:40.921899 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 30 13:22:40.921970 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 30 13:22:40.922052 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 30 13:22:40.922144 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 30 13:22:40.922224 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 30 13:22:40.922295 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 13:22:40.922374 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 30 13:22:40.922445 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 30 13:22:40.924676 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 30 13:22:40.924782 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 30 13:22:40.924871 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 30 13:22:40.924941 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 30 13:22:40.925010 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 30 13:22:40.925084 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 30 13:22:40.925152 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 30 13:22:40.925219 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 30 13:22:40.925292 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 30 13:22:40.925363 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 30 13:22:40.925436 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 30 13:22:40.925640 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 30 13:22:40.925721 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 30 13:22:40.925794 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 30 13:22:40.925868 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 30 13:22:40.925937 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 30 13:22:40.926005 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 30 13:22:40.926088 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 30 13:22:40.926158 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 30 13:22:40.926226 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 30 13:22:40.926298 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 30 13:22:40.926368 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 30 13:22:40.926445 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 30 13:22:40.927668 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 30 13:22:40.927780 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 30 13:22:40.927859 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 30 13:22:40.927936 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 30 13:22:40.928007 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 30 13:22:40.928075 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 30 13:22:40.928149 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 30 13:22:40.928218 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 13:22:40.928295 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 30 13:22:40.928365 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 13:22:40.928435 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 30 13:22:40.930238 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 13:22:40.930347 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 30 13:22:40.930419 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 13:22:40.930507 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 30 13:22:40.930628 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 13:22:40.930712 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 30 13:22:40.930782 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 13:22:40.930855 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 30 13:22:40.930925 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 13:22:40.931014 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 30 13:22:40.931091 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 13:22:40.931187 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 30 13:22:40.931258 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 13:22:40.931335 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 30 13:22:40.931406 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 30 13:22:40.931542 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 30 13:22:40.931636 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 30 13:22:40.931713 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 30 13:22:40.931788 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 30 13:22:40.931858 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 30 13:22:40.931926 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 30 13:22:40.932009 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 30 13:22:40.932081 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 30 13:22:40.932153 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 30 13:22:40.932219 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 30 13:22:40.932291 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 30 13:22:40.932370 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 30 13:22:40.932439 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 30 13:22:40.932595 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 30 13:22:40.932680 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 30 13:22:40.932748 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 30 13:22:40.932817 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 30 13:22:40.932884 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 30 13:22:40.932956 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 30 13:22:40.933039 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 30 13:22:40.933110 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:22:40.933181 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 30 13:22:40.933251 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 30 13:22:40.933346 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 30 13:22:40.933416 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 30 13:22:40.933501 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 13:22:40.933623 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 30 13:22:40.933715 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 30 13:22:40.933784 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 30 13:22:40.933850 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 30 13:22:40.933917 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 13:22:40.933997 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 30 13:22:40.934070 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 30 13:22:40.934139 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 30 13:22:40.934207 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 30 13:22:40.934278 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 30 13:22:40.934359 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 13:22:40.934451 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 30 13:22:40.934557 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 30 13:22:40.934676 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 30 13:22:40.934750 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 30 13:22:40.934827 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 13:22:40.934906 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 30 13:22:40.934981 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 30 13:22:40.935052 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 30 13:22:40.935121 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 30 13:22:40.935188 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 30 13:22:40.935260 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 13:22:40.935338 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 30 13:22:40.935411 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 30 13:22:40.935978 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 30 13:22:40.936082 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 30 13:22:40.936153 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 30 13:22:40.936228 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 13:22:40.936312 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 30 13:22:40.936389 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 30 13:22:40.936457 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 30 13:22:40.936542 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 30 13:22:40.936628 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 30 13:22:40.936697 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 30 13:22:40.936763 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 13:22:40.936835 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 30 13:22:40.937392 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 30 13:22:40.937550 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 30 13:22:40.937648 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 13:22:40.937732 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 30 13:22:40.937812 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 30 13:22:40.939670 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 30 13:22:40.939768 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 13:22:40.939841 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:22:40.939902 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:22:40.939973 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:22:40.940048 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 30 13:22:40.940112 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 30 13:22:40.940172 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 30 13:22:40.940247 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 30 13:22:40.940310 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 30 13:22:40.940375 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 30 13:22:40.940457 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 30 13:22:40.941781 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 30 13:22:40.941852 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 30 13:22:40.941926 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 30 13:22:40.941990 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 30 13:22:40.942060 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 30 13:22:40.942134 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 30 13:22:40.942197 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 30 13:22:40.942261 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 30 13:22:40.942341 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 30 13:22:40.942405 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 30 13:22:40.943711 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 30 13:22:40.943841 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 30 13:22:40.943904 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 30 13:22:40.943964 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 30 13:22:40.944034 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 30 13:22:40.944094 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 30 13:22:40.944165 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 30 13:22:40.944233 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 30 13:22:40.944297 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 30 13:22:40.944365 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 30 13:22:40.944376 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:22:40.944384 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:22:40.944392 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:22:40.944402 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:22:40.944410 kernel: iommu: Default domain type: Translated
Jan 30 13:22:40.944418 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:22:40.944426 kernel: efivars: Registered efivars operations
Jan 30 13:22:40.944434 kernel: vgaarb: loaded
Jan 30 13:22:40.944442 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:22:40.944450 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:22:40.944458 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:22:40.944479 kernel: pnp: PnP ACPI init
Jan 30 13:22:40.944567 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:22:40.944579 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:22:40.944599 kernel: NET: Registered PF_INET protocol family
Jan 30 13:22:40.944607 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:22:40.944616 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:22:40.944624 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:22:40.944632 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:22:40.944640 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:22:40.944652 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:22:40.944660 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:22:40.944668 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:22:40.944676 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:22:40.944763 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 30 13:22:40.944776 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:22:40.944783 kernel: kvm [1]: HYP mode not available
Jan 30 13:22:40.944791 kernel: Initialise system trusted keyrings
Jan 30 13:22:40.944799 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:22:40.944810 kernel: Key type asymmetric registered
Jan 30 13:22:40.944817 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:22:40.944825 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:22:40.944835 kernel: io scheduler mq-deadline registered
Jan 30 13:22:40.944843 kernel: io scheduler kyber registered
Jan 30 13:22:40.944851 kernel: io scheduler bfq registered
Jan 30 13:22:40.944860 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 13:22:40.944932 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 30 13:22:40.945004 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 30 13:22:40.945091 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 30 13:22:40.945167 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 30 13:22:40.945240 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 30 13:22:40.945308 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 30 13:22:40.945380 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 30 13:22:40.945449 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 30 13:22:40.946912 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:40.947010 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 30 13:22:40.947079 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 30 13:22:40.947148 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:40.947219 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 30 13:22:40.947286 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 30 13:22:40.947363 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:40.947434 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 30 13:22:40.947544 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 30 13:22:40.947639 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:40.947718 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 30 13:22:40.947787 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 30 13:22:40.947864 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:40.947937 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 30 13:22:40.948007 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 30 13:22:40.948075 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 
13:22:40.948086 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 30 13:22:40.948156 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 30 13:22:40.948227 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 30 13:22:40.948295 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 30 13:22:40.948306 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 13:22:40.948315 kernel: ACPI: button: Power Button [PWRB] Jan 30 13:22:40.948323 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 13:22:40.948395 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 30 13:22:40.948489 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 30 13:22:40.948501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:22:40.948513 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 30 13:22:40.948627 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 30 13:22:40.948640 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 30 13:22:40.948648 kernel: thunder_xcv, ver 1.0 Jan 30 13:22:40.948656 kernel: thunder_bgx, ver 1.0 Jan 30 13:22:40.948663 kernel: nicpf, ver 1.0 Jan 30 13:22:40.948671 kernel: nicvf, ver 1.0 Jan 30 13:22:40.948762 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 13:22:40.948833 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:22:40 UTC (1738243360) Jan 30 13:22:40.948844 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:22:40.948852 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 13:22:40.948861 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 13:22:40.948868 kernel: watchdog: Hard watchdog permanently disabled Jan 30 13:22:40.948876 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:22:40.948884 kernel: Segment 
Routing with IPv6 Jan 30 13:22:40.948892 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:22:40.948900 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:22:40.948910 kernel: Key type dns_resolver registered Jan 30 13:22:40.948918 kernel: registered taskstats version 1 Jan 30 13:22:40.948925 kernel: Loading compiled-in X.509 certificates Jan 30 13:22:40.948933 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a' Jan 30 13:22:40.948943 kernel: Key type .fscrypt registered Jan 30 13:22:40.948951 kernel: Key type fscrypt-provisioning registered Jan 30 13:22:40.948958 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:22:40.948966 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:22:40.948976 kernel: ima: No architecture policies found Jan 30 13:22:40.948984 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 13:22:40.948991 kernel: clk: Disabling unused clocks Jan 30 13:22:40.948999 kernel: Freeing unused kernel memory: 39936K Jan 30 13:22:40.949007 kernel: Run /init as init process Jan 30 13:22:40.949015 kernel: with arguments: Jan 30 13:22:40.949022 kernel: /init Jan 30 13:22:40.949030 kernel: with environment: Jan 30 13:22:40.949037 kernel: HOME=/ Jan 30 13:22:40.949045 kernel: TERM=linux Jan 30 13:22:40.949054 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:22:40.949064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:22:40.949074 systemd[1]: Detected virtualization kvm. Jan 30 13:22:40.949083 systemd[1]: Detected architecture arm64. Jan 30 13:22:40.949091 systemd[1]: Running in initrd. 
Jan 30 13:22:40.949099 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:22:40.949107 systemd[1]: Hostname set to .
Jan 30 13:22:40.949117 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:22:40.949125 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:22:40.949134 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:22:40.949142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:22:40.949151 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:22:40.949160 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:22:40.949170 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:22:40.949179 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:22:40.949191 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:22:40.949200 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:22:40.949208 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:22:40.949217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:22:40.949225 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:22:40.949233 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:22:40.949242 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:22:40.949252 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:22:40.949260 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:22:40.949269 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:22:40.949277 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:22:40.949286 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:22:40.949294 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:22:40.949303 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:22:40.949311 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:22:40.949321 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:22:40.949329 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:22:40.949338 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:22:40.949346 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:22:40.949354 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:22:40.949363 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:22:40.949372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:22:40.949380 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:40.949389 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:22:40.949399 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:22:40.949407 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:22:40.949439 systemd-journald[238]: Collecting audit messages is disabled.
Jan 30 13:22:40.949464 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:22:40.949621 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:22:40.949634 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:22:40.949643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:22:40.949651 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:40.949665 kernel: Bridge firewalling registered
Jan 30 13:22:40.949673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:22:40.949682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:40.949692 systemd-journald[238]: Journal started
Jan 30 13:22:40.949719 systemd-journald[238]: Runtime Journal (/run/log/journal/703b74b8a80f41a0be4bc518a8f70304) is 8.0M, max 76.6M, 68.6M free.
Jan 30 13:22:40.906001 systemd-modules-load[239]: Inserted module 'overlay'
Jan 30 13:22:40.936885 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 30 13:22:40.955030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:22:40.955094 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:22:40.958110 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:22:40.977796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:22:40.979869 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:40.982071 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:22:40.986886 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:22:40.988542 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:22:40.995796 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:22:41.002061 dracut-cmdline[272]: dracut-dracut-053
Jan 30 13:22:41.005506 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:22:41.032056 systemd-resolved[275]: Positive Trust Anchors:
Jan 30 13:22:41.032074 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:22:41.032104 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:22:41.042037 systemd-resolved[275]: Defaulting to hostname 'linux'.
Jan 30 13:22:41.044276 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:22:41.045657 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:22:41.097533 kernel: SCSI subsystem initialized
Jan 30 13:22:41.102506 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:22:41.112536 kernel: iscsi: registered transport (tcp)
Jan 30 13:22:41.126554 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:22:41.126662 kernel: QLogic iSCSI HBA Driver
Jan 30 13:22:41.186196 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:22:41.199808 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:22:41.219554 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:22:41.219667 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:22:41.220509 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:22:41.273533 kernel: raid6: neonx8 gen() 15685 MB/s
Jan 30 13:22:41.290514 kernel: raid6: neonx4 gen() 15726 MB/s
Jan 30 13:22:41.309682 kernel: raid6: neonx2 gen() 13145 MB/s
Jan 30 13:22:41.324508 kernel: raid6: neonx1 gen() 10441 MB/s
Jan 30 13:22:41.341543 kernel: raid6: int64x8 gen() 6747 MB/s
Jan 30 13:22:41.358535 kernel: raid6: int64x4 gen() 7309 MB/s
Jan 30 13:22:41.375780 kernel: raid6: int64x2 gen() 6080 MB/s
Jan 30 13:22:41.392539 kernel: raid6: int64x1 gen() 5017 MB/s
Jan 30 13:22:41.392633 kernel: raid6: using algorithm neonx4 gen() 15726 MB/s
Jan 30 13:22:41.409548 kernel: raid6: .... xor() 12358 MB/s, rmw enabled
Jan 30 13:22:41.409669 kernel: raid6: using neon recovery algorithm
Jan 30 13:22:41.414638 kernel: xor: measuring software checksum speed
Jan 30 13:22:41.414725 kernel: 8regs : 21607 MB/sec
Jan 30 13:22:41.414749 kernel: 32regs : 21704 MB/sec
Jan 30 13:22:41.414785 kernel: arm64_neon : 27804 MB/sec
Jan 30 13:22:41.415513 kernel: xor: using function: arm64_neon (27804 MB/sec)
Jan 30 13:22:41.466545 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:22:41.484553 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:22:41.491713 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:22:41.504710 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Jan 30 13:22:41.508125 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:22:41.518697 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:22:41.534318 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 30 13:22:41.569614 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:22:41.577767 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:22:41.628546 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:22:41.637111 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:22:41.666098 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:22:41.667229 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:22:41.668309 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:22:41.669239 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:22:41.677903 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:22:41.697595 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:22:41.763495 kernel: scsi host0: Virtio SCSI HBA
Jan 30 13:22:41.773693 kernel: ACPI: bus type USB registered
Jan 30 13:22:41.773784 kernel: usbcore: registered new interface driver usbfs
Jan 30 13:22:41.775535 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 30 13:22:41.775659 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 30 13:22:41.776892 kernel: usbcore: registered new interface driver hub
Jan 30 13:22:41.781521 kernel: usbcore: registered new device driver usb
Jan 30 13:22:41.788750 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:22:41.789098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:41.791776 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:41.793920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:22:41.794425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:41.796285 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:41.801174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:22:41.819538 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 30 13:22:41.823105 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 30 13:22:41.823240 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:22:41.823251 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 30 13:22:41.830822 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:22:41.839637 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 30 13:22:41.854028 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 30 13:22:41.854184 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 30 13:22:41.854281 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 30 13:22:41.854365 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 30 13:22:41.854446 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 13:22:41.861531 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 30 13:22:41.861674 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 30 13:22:41.861763 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:22:41.861775 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 30 13:22:41.861863 kernel: GPT:17805311 != 80003071
Jan 30 13:22:41.861873 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 30 13:22:41.861953 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:22:41.861964 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 30 13:22:41.862044 kernel: GPT:17805311 != 80003071
Jan 30 13:22:41.862054 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:22:41.862063 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:22:41.862073 kernel: hub 1-0:1.0: USB hub found
Jan 30 13:22:41.862185 kernel: hub 1-0:1.0: 4 ports detected
Jan 30 13:22:41.862266 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 30 13:22:41.862367 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 30 13:22:41.862490 kernel: hub 2-0:1.0: USB hub found
Jan 30 13:22:41.862653 kernel: hub 2-0:1.0: 4 ports detected
Jan 30 13:22:41.842916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:22:41.877277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:22:41.914961 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (507)
Jan 30 13:22:41.916561 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 30 13:22:41.919153 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (510)
Jan 30 13:22:41.924388 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 30 13:22:41.936446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 30 13:22:41.941892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 30 13:22:41.942633 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 30 13:22:41.947786 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:22:41.957647 disk-uuid[579]: Primary Header is updated.
Jan 30 13:22:41.957647 disk-uuid[579]: Secondary Entries is updated.
Jan 30 13:22:41.957647 disk-uuid[579]: Secondary Header is updated.
Jan 30 13:22:41.977558 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:22:42.094553 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 30 13:22:42.337631 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 30 13:22:42.475486 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 30 13:22:42.475562 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 30 13:22:42.475802 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 30 13:22:42.529640 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 30 13:22:42.529914 kernel: usbcore: registered new interface driver usbhid
Jan 30 13:22:42.529927 kernel: usbhid: USB HID core driver
Jan 30 13:22:42.985535 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:22:42.987083 disk-uuid[580]: The operation has completed successfully.
Jan 30 13:22:43.065205 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:22:43.065348 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:22:43.078872 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:22:43.086531 sh[591]: Success
Jan 30 13:22:43.100410 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:22:43.173393 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:22:43.190743 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:22:43.194529 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:22:43.218003 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:22:43.218094 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:22:43.218120 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:22:43.218141 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:22:43.218774 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:22:43.227917 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 13:22:43.231465 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:22:43.232213 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:22:43.242789 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:22:43.246757 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:22:43.265686 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:43.265766 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:22:43.265794 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:22:43.274586 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 13:22:43.274665 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:22:43.288120 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:22:43.290545 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:22:43.301536 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:22:43.307768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:22:43.405196 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:22:43.415767 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:22:43.418592 ignition[691]: Ignition 2.20.0
Jan 30 13:22:43.418600 ignition[691]: Stage: fetch-offline
Jan 30 13:22:43.418645 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:43.426529 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:22:43.418653 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:43.418902 ignition[691]: parsed url from cmdline: ""
Jan 30 13:22:43.418906 ignition[691]: no config URL provided
Jan 30 13:22:43.418912 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:22:43.418920 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:22:43.418926 ignition[691]: failed to fetch config: resource requires networking
Jan 30 13:22:43.419267 ignition[691]: Ignition finished successfully
Jan 30 13:22:43.447678 systemd-networkd[778]: lo: Link UP
Jan 30 13:22:43.447688 systemd-networkd[778]: lo: Gained carrier
Jan 30 13:22:43.449425 systemd-networkd[778]: Enumeration completed
Jan 30 13:22:43.449663 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:22:43.450641 systemd[1]: Reached target network.target - Network.
Jan 30 13:22:43.452052 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:43.452055 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:43.453067 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:43.453070 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:22:43.453797 systemd-networkd[778]: eth0: Link UP
Jan 30 13:22:43.453801 systemd-networkd[778]: eth0: Gained carrier
Jan 30 13:22:43.453811 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:43.459882 systemd-networkd[778]: eth1: Link UP
Jan 30 13:22:43.459886 systemd-networkd[778]: eth1: Gained carrier
Jan 30 13:22:43.459898 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:22:43.461782 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:22:43.476997 ignition[781]: Ignition 2.20.0
Jan 30 13:22:43.477714 ignition[781]: Stage: fetch
Jan 30 13:22:43.478035 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:43.478049 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:43.478184 ignition[781]: parsed url from cmdline: ""
Jan 30 13:22:43.478187 ignition[781]: no config URL provided
Jan 30 13:22:43.478193 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:22:43.478202 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:22:43.478381 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 30 13:22:43.479647 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 30 13:22:43.482560 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:22:43.512605 systemd-networkd[778]: eth0: DHCPv4 address 78.46.226.143/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 30 13:22:43.679760 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 30 13:22:43.685692 ignition[781]: GET result: OK
Jan 30 13:22:43.685883 ignition[781]: parsing config with SHA512: 7cd95080fa92d3f9afb8e47f9aa17bdaa2f1e158f6a2b5b69c06bc4fce62527badba38b01bdfb9c1b23539382a0df579f412e815e6203dab12f52c6eebc17bdb
Jan 30 13:22:43.692495 unknown[781]: fetched base config from "system"
Jan 30 13:22:43.692512 unknown[781]: fetched base config from "system"
Jan 30 13:22:43.693031 ignition[781]: fetch: fetch complete
Jan 30 13:22:43.692517 unknown[781]: fetched user config from "hetzner"
Jan 30 13:22:43.693036 ignition[781]: fetch: fetch passed
Jan 30 13:22:43.695332 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:22:43.693090 ignition[781]: Ignition finished successfully
Jan 30 13:22:43.706049 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:22:43.726128 ignition[788]: Ignition 2.20.0
Jan 30 13:22:43.726638 ignition[788]: Stage: kargs
Jan 30 13:22:43.727681 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:43.727709 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:43.729515 ignition[788]: kargs: kargs passed
Jan 30 13:22:43.729647 ignition[788]: Ignition finished successfully
Jan 30 13:22:43.732067 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:22:43.738817 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:22:43.755085 ignition[795]: Ignition 2.20.0
Jan 30 13:22:43.755549 ignition[795]: Stage: disks
Jan 30 13:22:43.755991 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:22:43.756007 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 30 13:22:43.757332 ignition[795]: disks: disks passed
Jan 30 13:22:43.757403 ignition[795]: Ignition finished successfully
Jan 30 13:22:43.760092 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:22:43.762063 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:22:43.763641 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:22:43.764223 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:22:43.764788 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:22:43.765860 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:22:43.774880 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:22:43.799250 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 30 13:22:43.804541 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:22:43.810912 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:22:43.886513 kernel: EXT4-fs (sda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:22:43.887419 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:22:43.889040 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:22:43.901647 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:22:43.905022 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:22:43.907728 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:22:43.909632 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:22:43.911862 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:22:43.914827 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:22:43.922710 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:22:43.927960 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (812) Jan 30 13:22:43.931025 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:43.931086 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:22:43.932496 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:22:43.937081 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:22:43.937153 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:22:43.942655 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:22:43.984814 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:22:43.986310 coreos-metadata[814]: Jan 30 13:22:43.986 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 30 13:22:43.988556 coreos-metadata[814]: Jan 30 13:22:43.988 INFO Fetch successful Jan 30 13:22:43.988556 coreos-metadata[814]: Jan 30 13:22:43.988 INFO wrote hostname ci-4186-1-0-1-a1b218d3fd to /sysroot/etc/hostname Jan 30 13:22:43.994017 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:22:43.996001 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:22:44.001628 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:22:44.006417 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:22:44.125178 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:22:44.137767 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:22:44.140831 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 30 13:22:44.152669 kernel: BTRFS info (device sda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:44.182736 ignition[929]: INFO : Ignition 2.20.0 Jan 30 13:22:44.182736 ignition[929]: INFO : Stage: mount Jan 30 13:22:44.184415 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:44.184415 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:22:44.185975 ignition[929]: INFO : mount: mount passed Jan 30 13:22:44.185975 ignition[929]: INFO : Ignition finished successfully Jan 30 13:22:44.186863 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:22:44.188577 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:22:44.195805 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:22:44.217165 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:22:44.228942 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:22:44.247608 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (941) Jan 30 13:22:44.249528 kernel: BTRFS info (device sda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa Jan 30 13:22:44.249599 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 13:22:44.249612 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:22:44.254649 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:22:44.254726 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:22:44.258004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:22:44.284053 ignition[958]: INFO : Ignition 2.20.0 Jan 30 13:22:44.284053 ignition[958]: INFO : Stage: files Jan 30 13:22:44.285270 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:44.285270 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:22:44.286818 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:22:44.286818 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:22:44.286818 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:22:44.291925 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:22:44.293791 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:22:44.295581 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:22:44.294851 unknown[958]: wrote ssh authorized keys file for user: core Jan 30 13:22:44.300945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 13:22:44.300945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 30 13:22:44.424810 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:22:44.642665 systemd-networkd[778]: eth0: Gained IPv6LL Jan 30 13:22:44.796685 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 30 13:22:44.796685 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:22:44.796685 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 30 13:22:44.899214 systemd-networkd[778]: eth1: Gained IPv6LL Jan 30 13:22:45.366106 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:22:45.480684 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 30 13:22:45.482050 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:22:45.484959 ignition[958]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 13:22:45.484959 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 13:22:45.743288 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:22:46.018406 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 13:22:46.018406 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:22:46.021489 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:22:46.022675 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:22:46.022675 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:22:46.022675 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 13:22:46.022675 ignition[958]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 13:22:46.022675 
ignition[958]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 30 13:22:46.022675 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 13:22:46.022675 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:22:46.022675 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:22:46.022675 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:22:46.022675 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:22:46.022675 ignition[958]: INFO : files: files passed Jan 30 13:22:46.022675 ignition[958]: INFO : Ignition finished successfully Jan 30 13:22:46.024844 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:22:46.032711 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:22:46.036951 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:22:46.040732 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:22:46.041422 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 30 13:22:46.055484 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:46.055484 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:46.057955 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:22:46.060733 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:22:46.062947 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:22:46.068812 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:22:46.115830 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:22:46.115979 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:22:46.117271 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:22:46.118130 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:22:46.119006 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:22:46.124762 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:22:46.141420 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:22:46.157828 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:22:46.170686 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:22:46.172166 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:22:46.172980 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:22:46.174175 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 30 13:22:46.174353 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:22:46.175925 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:22:46.177158 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:22:46.179325 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:22:46.180160 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:22:46.183087 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:22:46.184868 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:22:46.186650 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:22:46.187865 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:22:46.189249 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:22:46.190485 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:22:46.191506 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:22:46.191698 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:22:46.192907 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:22:46.193591 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:22:46.194768 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:22:46.194850 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:22:46.196012 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:22:46.196184 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:22:46.197817 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 30 13:22:46.197953 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:22:46.199256 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:22:46.199352 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:22:46.200450 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:22:46.200603 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:22:46.209814 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:22:46.214587 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:22:46.217591 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:22:46.217804 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:22:46.222732 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:22:46.222856 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:22:46.231012 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:22:46.231189 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:22:46.234310 ignition[1010]: INFO : Ignition 2.20.0 Jan 30 13:22:46.234310 ignition[1010]: INFO : Stage: umount Jan 30 13:22:46.234310 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:22:46.234310 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 30 13:22:46.238434 ignition[1010]: INFO : umount: umount passed Jan 30 13:22:46.238434 ignition[1010]: INFO : Ignition finished successfully Jan 30 13:22:46.240754 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:22:46.240886 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:22:46.242227 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 30 13:22:46.242278 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:22:46.243844 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:22:46.243918 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:22:46.245628 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:22:46.245742 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:22:46.246513 systemd[1]: Stopped target network.target - Network. Jan 30 13:22:46.248529 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:22:46.248697 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:22:46.250369 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:22:46.251217 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:22:46.257560 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:22:46.258704 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:22:46.260060 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:22:46.261084 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:22:46.261143 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:22:46.262404 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:22:46.262491 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:22:46.264215 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:22:46.264315 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:22:46.265943 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:22:46.266013 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:22:46.267264 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 30 13:22:46.270654 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:22:46.274892 systemd-networkd[778]: eth0: DHCPv6 lease lost Jan 30 13:22:46.280265 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:22:46.281544 systemd-networkd[778]: eth1: DHCPv6 lease lost Jan 30 13:22:46.283318 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:22:46.283456 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:22:46.286210 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:22:46.287690 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:22:46.290071 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:22:46.290164 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:46.304752 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:22:46.305487 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:22:46.305602 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:22:46.308591 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:22:46.308659 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:46.309314 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:22:46.309359 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:46.309999 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:22:46.310040 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:46.310993 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:46.326837 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 30 13:22:46.328067 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:22:46.329679 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:22:46.329809 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:22:46.332626 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:22:46.332715 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:46.333416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:22:46.333449 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:46.335156 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:22:46.335219 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:22:46.336602 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:22:46.336657 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:22:46.338014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:22:46.338067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:22:46.339281 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:22:46.339331 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:22:46.350053 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:22:46.351272 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:22:46.351523 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:22:46.353616 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:22:46.353686 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 30 13:22:46.354332 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:22:46.354374 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:22:46.355137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:46.355185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:46.356264 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:22:46.356376 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:22:46.367365 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:22:46.367614 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:22:46.370042 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:22:46.375778 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:22:46.389193 systemd[1]: Switching root. Jan 30 13:22:46.425161 systemd-journald[238]: Journal stopped Jan 30 13:22:47.374854 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 30 13:22:47.374937 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:22:47.374950 kernel: SELinux: policy capability open_perms=1 Jan 30 13:22:47.374964 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:22:47.374977 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:22:47.374987 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:22:47.374997 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:22:47.375011 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:22:47.375020 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:22:47.375029 kernel: audit: type=1403 audit(1738243366.588:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:22:47.375039 systemd[1]: Successfully loaded SELinux policy in 38.250ms. 
Jan 30 13:22:47.375065 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.419ms. Jan 30 13:22:47.375076 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:22:47.375087 systemd[1]: Detected virtualization kvm. Jan 30 13:22:47.375098 systemd[1]: Detected architecture arm64. Jan 30 13:22:47.375110 systemd[1]: Detected first boot. Jan 30 13:22:47.375124 systemd[1]: Hostname set to . Jan 30 13:22:47.375134 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:22:47.375144 zram_generator::config[1054]: No configuration found. Jan 30 13:22:47.375155 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:22:47.375164 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:22:47.375174 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:22:47.375184 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:22:47.375195 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:22:47.375207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:22:47.375218 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:22:47.375228 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:22:47.375240 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:22:47.375251 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:22:47.375261 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jan 30 13:22:47.375271 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:22:47.375282 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:22:47.375294 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:22:47.375304 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:22:47.375314 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:22:47.375325 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:22:47.375335 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:22:47.375350 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 13:22:47.375360 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:22:47.375371 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:22:47.375383 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:22:47.375393 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:22:47.375403 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:22:47.375414 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:22:47.375424 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:22:47.375435 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:22:47.375445 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:22:47.375455 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:22:47.375512 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Jan 30 13:22:47.375527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:22:47.375537 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:22:47.375584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:22:47.375596 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:22:47.375606 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:22:47.375617 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:22:47.375634 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:22:47.375646 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:22:47.375657 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:22:47.375668 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:22:47.375678 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:22:47.375688 systemd[1]: Reached target machines.target - Containers. Jan 30 13:22:47.375699 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:22:47.375712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:22:47.375722 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:22:47.375733 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:22:47.375743 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:22:47.375754 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 30 13:22:47.375764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:22:47.375774 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:22:47.375785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:22:47.375797 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:22:47.375811 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:22:47.375821 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:22:47.375831 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:22:47.375841 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:22:47.375852 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:22:47.375862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:22:47.375874 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:22:47.375884 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:22:47.375896 kernel: loop: module loaded Jan 30 13:22:47.375909 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:22:47.375920 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:22:47.375930 systemd[1]: Stopped verity-setup.service. Jan 30 13:22:47.375940 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:22:47.375952 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:22:47.375962 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:22:47.375973 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 30 13:22:47.375983 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:22:47.376028 systemd-journald[1124]: Collecting audit messages is disabled. Jan 30 13:22:47.376050 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:22:47.376062 systemd-journald[1124]: Journal started Jan 30 13:22:47.376092 systemd-journald[1124]: Runtime Journal (/run/log/journal/703b74b8a80f41a0be4bc518a8f70304) is 8.0M, max 76.6M, 68.6M free. Jan 30 13:22:47.377030 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:22:47.132845 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:22:47.157303 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:22:47.158098 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:22:47.379526 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:22:47.383174 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:22:47.383375 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:22:47.384942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:22:47.385348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:22:47.393500 kernel: ACPI: bus type drm_connector registered Jan 30 13:22:47.396842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:22:47.397043 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:22:47.398143 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:22:47.400695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:22:47.401731 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:22:47.403577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 30 13:22:47.407695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:22:47.421650 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:22:47.424442 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:22:47.426574 kernel: fuse: init (API version 7.39) Jan 30 13:22:47.430572 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:22:47.431997 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:22:47.432180 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:22:47.438868 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:22:47.444787 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:22:47.451670 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:22:47.452341 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:22:47.452386 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:22:47.454797 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:22:47.462805 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:22:47.466776 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:22:47.467692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:22:47.479165 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:22:47.485857 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 30 13:22:47.490707 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:22:47.496730 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:22:47.497365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:22:47.500887 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:22:47.506986 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:22:47.509867 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:22:47.512407 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:22:47.513788 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:22:47.515499 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:22:47.539389 systemd-journald[1124]: Time spent on flushing to /var/log/journal/703b74b8a80f41a0be4bc518a8f70304 is 68.344ms for 1134 entries. Jan 30 13:22:47.539389 systemd-journald[1124]: System Journal (/var/log/journal/703b74b8a80f41a0be4bc518a8f70304) is 8.0M, max 584.8M, 576.8M free. Jan 30 13:22:47.657674 systemd-journald[1124]: Received client request to flush runtime journal. Jan 30 13:22:47.657782 kernel: loop0: detected capacity change from 0 to 194096 Jan 30 13:22:47.657827 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:22:47.563842 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:22:47.577952 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:22:47.582782 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 30 13:22:47.595082 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:22:47.609818 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:22:47.615725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:22:47.653367 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:22:47.653460 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 30 13:22:47.653505 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jan 30 13:22:47.662091 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:22:47.676981 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:22:47.691133 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:22:47.694036 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:22:47.696351 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:22:47.715630 kernel: loop1: detected capacity change from 0 to 116784 Jan 30 13:22:47.741611 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:22:47.754208 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:22:47.754891 kernel: loop2: detected capacity change from 0 to 8 Jan 30 13:22:47.774066 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 30 13:22:47.774085 systemd-tmpfiles[1191]: ACLs are not supported, ignoring. Jan 30 13:22:47.786557 kernel: loop3: detected capacity change from 0 to 113552 Jan 30 13:22:47.781941 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 30 13:22:47.830854 kernel: loop4: detected capacity change from 0 to 194096 Jan 30 13:22:47.868507 kernel: loop5: detected capacity change from 0 to 116784 Jan 30 13:22:47.885505 kernel: loop6: detected capacity change from 0 to 8 Jan 30 13:22:47.893237 kernel: loop7: detected capacity change from 0 to 113552 Jan 30 13:22:47.909059 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 30 13:22:47.913108 (sd-merge)[1197]: Merged extensions into '/usr'. Jan 30 13:22:47.922674 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:22:47.922697 systemd[1]: Reloading... Jan 30 13:22:48.092344 zram_generator::config[1219]: No configuration found. Jan 30 13:22:48.118589 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:22:48.220987 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:22:48.267455 systemd[1]: Reloading finished in 344 ms. Jan 30 13:22:48.308239 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:22:48.313805 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:22:48.322759 systemd[1]: Starting ensure-sysext.service... Jan 30 13:22:48.326359 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:22:48.335996 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:22:48.336018 systemd[1]: Reloading... Jan 30 13:22:48.415639 zram_generator::config[1286]: No configuration found. Jan 30 13:22:48.414980 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 30 13:22:48.417054 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:22:48.421349 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:22:48.423869 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 30 13:22:48.425782 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jan 30 13:22:48.440185 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:22:48.442562 systemd-tmpfiles[1261]: Skipping /boot Jan 30 13:22:48.461317 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:22:48.463077 systemd-tmpfiles[1261]: Skipping /boot Jan 30 13:22:48.572891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:22:48.620207 systemd[1]: Reloading finished in 283 ms. Jan 30 13:22:48.638339 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:22:48.647134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:22:48.658918 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:22:48.663780 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:22:48.672823 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:22:48.681009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:22:48.686938 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:22:48.692251 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 30 13:22:48.699311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:22:48.711279 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:22:48.715618 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:22:48.718697 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:22:48.720054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:22:48.731893 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:22:48.734045 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:22:48.734208 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:22:48.737073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:22:48.739511 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:22:48.740198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:22:48.746373 systemd[1]: Finished ensure-sysext.service. Jan 30 13:22:48.750868 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:22:48.773142 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:22:48.798287 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:22:48.799508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:22:48.808148 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jan 30 13:22:48.810543 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:22:48.813549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:22:48.817538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:22:48.820338 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:22:48.833309 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:22:48.833995 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:22:48.834514 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:22:48.834780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:22:48.838038 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:22:48.838240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:22:48.840308 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:22:48.845313 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Jan 30 13:22:48.848457 augenrules[1366]: No rules Jan 30 13:22:48.850707 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:22:48.852549 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:22:48.853693 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:22:48.877397 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:22:48.883206 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:22:48.897071 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:22:48.985987 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:22:48.989001 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:22:49.030231 systemd-resolved[1332]: Positive Trust Anchors: Jan 30 13:22:49.032609 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:22:49.032654 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:22:49.037815 systemd-networkd[1386]: lo: Link UP Jan 30 13:22:49.037824 systemd-networkd[1386]: lo: Gained carrier Jan 30 13:22:49.038841 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 13:22:49.038930 systemd-networkd[1386]: Enumeration completed Jan 30 13:22:49.039030 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:22:49.042970 systemd-resolved[1332]: Using system hostname 'ci-4186-1-0-1-a1b218d3fd'. Jan 30 13:22:49.072157 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:22:49.072938 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:22:49.074192 systemd[1]: Reached target network.target - Network. Jan 30 13:22:49.076697 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 30 13:22:49.127501 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:22:49.170405 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:22:49.170420 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:22:49.171279 systemd-networkd[1386]: eth0: Link UP Jan 30 13:22:49.171293 systemd-networkd[1386]: eth0: Gained carrier Jan 30 13:22:49.171316 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:22:49.205082 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:22:49.205094 systemd-networkd[1386]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:22:49.207727 systemd-networkd[1386]: eth1: Link UP Jan 30 13:22:49.207873 systemd-networkd[1386]: eth1: Gained carrier Jan 30 13:22:49.207943 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:22:49.231744 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1383) Jan 30 13:22:49.235186 systemd-networkd[1386]: eth0: DHCPv4 address 78.46.226.143/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 30 13:22:49.238179 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Jan 30 13:22:49.245501 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 30 13:22:49.246898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 30 13:22:49.267213 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:22:49.268855 systemd-networkd[1386]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:22:49.273118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:22:49.274840 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Jan 30 13:22:49.281614 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Jan 30 13:22:49.283950 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:22:49.286453 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:22:49.286520 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:22:49.287181 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:22:49.287499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:22:49.294253 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:22:49.295731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:22:49.298025 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:22:49.298487 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:22:49.321312 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:22:49.324192 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 30 13:22:49.343507 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 30 13:22:49.343661 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 30 13:22:49.343682 kernel: [drm] features: -context_init Jan 30 13:22:49.343707 kernel: [drm] number of scanouts: 1 Jan 30 13:22:49.343718 kernel: [drm] number of cap sets: 0 Jan 30 13:22:49.347567 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 30 13:22:49.350878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:49.361787 kernel: Console: switching to colour frame buffer device 160x50 Jan 30 13:22:49.372871 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:22:49.373314 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:49.374510 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 30 13:22:49.376758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 30 13:22:49.387387 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:22:49.390796 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:22:49.406841 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:22:49.477885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:22:49.496102 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:22:49.504825 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:22:49.519904 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:22:49.551391 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Jan 30 13:22:49.554122 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:22:49.554970 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:22:49.555917 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:22:49.557144 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:22:49.558207 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:22:49.559139 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:22:49.560102 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:22:49.560908 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:22:49.560945 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:22:49.561455 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:22:49.564296 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:22:49.567007 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:22:49.571930 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:22:49.574457 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:22:49.576105 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:22:49.577031 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:22:49.577700 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:22:49.578388 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:22:49.578427 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jan 30 13:22:49.581724 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:22:49.587835 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:22:49.591856 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:22:49.597111 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:22:49.608742 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:22:49.614787 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:22:49.615754 jq[1452]: false Jan 30 13:22:49.615396 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:22:49.622427 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:22:49.632869 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:22:49.638912 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 30 13:22:49.650616 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 30 13:22:49.666578 extend-filesystems[1455]: Found loop4 Jan 30 13:22:49.666578 extend-filesystems[1455]: Found loop5 Jan 30 13:22:49.666578 extend-filesystems[1455]: Found loop6 Jan 30 13:22:49.672493 extend-filesystems[1455]: Found loop7 Jan 30 13:22:49.672493 extend-filesystems[1455]: Found sda Jan 30 13:22:49.672493 extend-filesystems[1455]: Found sda1 Jan 30 13:22:49.672493 extend-filesystems[1455]: Found sda2 Jan 30 13:22:49.672493 extend-filesystems[1455]: Found sda3 Jan 30 13:22:49.672493 extend-filesystems[1455]: Found usr Jan 30 13:22:49.672493 extend-filesystems[1455]: Found sda4 Jan 30 13:22:49.672493 extend-filesystems[1455]: Found sda6 Jan 30 13:22:49.672493 extend-filesystems[1455]: Found sda7 Jan 30 13:22:49.672493 extend-filesystems[1455]: Found sda9 Jan 30 13:22:49.672493 extend-filesystems[1455]: Checking size of /dev/sda9 Jan 30 13:22:49.666734 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:22:49.691703 dbus-daemon[1451]: [system] SELinux support is enabled Jan 30 13:22:49.689894 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:22:49.691290 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:22:49.693053 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:22:49.703848 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:22:49.711737 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:22:49.713069 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:22:49.720140 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:22:49.725250 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 30 13:22:49.726201 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:22:49.728769 coreos-metadata[1450]: Jan 30 13:22:49.727 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 30 13:22:49.731739 extend-filesystems[1455]: Resized partition /dev/sda9 Jan 30 13:22:49.741954 coreos-metadata[1450]: Jan 30 13:22:49.737 INFO Fetch successful Jan 30 13:22:49.739439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:22:49.739628 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:22:49.741623 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:22:49.741651 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:22:49.745498 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:22:49.750604 coreos-metadata[1450]: Jan 30 13:22:49.746 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 30 13:22:49.758952 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 30 13:22:49.759059 coreos-metadata[1450]: Jan 30 13:22:49.755 INFO Fetch successful Jan 30 13:22:49.789389 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:22:49.793165 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:22:49.801648 jq[1470]: true Jan 30 13:22:49.818205 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:22:49.818801 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:22:49.831175 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:22:49.848977 tar[1475]: linux-arm64/helm Jan 30 13:22:49.858573 update_engine[1467]: I20250130 13:22:49.852658 1467 main.cc:92] Flatcar Update Engine starting Jan 30 13:22:49.859237 jq[1494]: true Jan 30 13:22:49.871042 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:22:49.873922 update_engine[1467]: I20250130 13:22:49.871178 1467 update_check_scheduler.cc:74] Next update check in 3m16s Jan 30 13:22:49.880893 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:22:49.929508 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1403) Jan 30 13:22:49.970395 systemd-logind[1463]: New seat seat0. Jan 30 13:22:50.012021 systemd-logind[1463]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:22:50.012121 systemd-logind[1463]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 30 13:22:50.012879 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:22:50.072638 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:22:50.078495 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 30 13:22:50.079211 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:22:50.102496 extend-filesystems[1480]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 13:22:50.102496 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 30 13:22:50.102496 extend-filesystems[1480]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. 
Jan 30 13:22:50.112268 extend-filesystems[1455]: Resized filesystem in /dev/sda9 Jan 30 13:22:50.112268 extend-filesystems[1455]: Found sr0 Jan 30 13:22:50.105870 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:22:50.106052 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:22:50.139769 bash[1525]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:22:50.142798 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:22:50.154978 systemd[1]: Starting sshkeys.service... Jan 30 13:22:50.182854 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:22:50.188866 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 13:22:50.204088 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 13:22:50.214516 containerd[1492]: time="2025-01-30T13:22:50.207419560Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:22:50.251743 coreos-metadata[1536]: Jan 30 13:22:50.251 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 30 13:22:50.253646 coreos-metadata[1536]: Jan 30 13:22:50.253 INFO Fetch successful Jan 30 13:22:50.259258 unknown[1536]: wrote ssh authorized keys file for user: core Jan 30 13:22:50.297488 update-ssh-keys[1541]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:22:50.299981 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 13:22:50.304733 containerd[1492]: time="2025-01-30T13:22:50.303035800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:50.302465 systemd[1]: Finished sshkeys.service. 
Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.307779480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.307834800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.307856200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.308040560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.308058000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.308135280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.308150560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.308344880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.308359960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.308374360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:50.308769 containerd[1492]: time="2025-01-30T13:22:50.308384200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:50.309054 containerd[1492]: time="2025-01-30T13:22:50.308458840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:50.309506 containerd[1492]: time="2025-01-30T13:22:50.309243760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:22:50.309506 containerd[1492]: time="2025-01-30T13:22:50.309388600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:22:50.309506 containerd[1492]: time="2025-01-30T13:22:50.309402360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:22:50.309506 containerd[1492]: time="2025-01-30T13:22:50.309509920Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 13:22:50.309883 containerd[1492]: time="2025-01-30T13:22:50.309576840Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:22:50.328944 containerd[1492]: time="2025-01-30T13:22:50.328789480Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:22:50.328944 containerd[1492]: time="2025-01-30T13:22:50.328880520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:22:50.328944 containerd[1492]: time="2025-01-30T13:22:50.328897840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:22:50.328944 containerd[1492]: time="2025-01-30T13:22:50.328917120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:22:50.328944 containerd[1492]: time="2025-01-30T13:22:50.328932720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:22:50.329557 containerd[1492]: time="2025-01-30T13:22:50.329147640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:22:50.331812 containerd[1492]: time="2025-01-30T13:22:50.331629240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.331882680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.331902040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.331926440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.331941000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.331956800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.331970960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.331987920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.332008080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.332038400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.332052920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.332066080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.332089360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.332103640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 30 13:22:50.332416 containerd[1492]: time="2025-01-30T13:22:50.332116640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332130240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332142320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332158240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332170320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332188360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332205400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332221480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332234520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332245680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332264440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332280680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332303680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332323240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.332862 containerd[1492]: time="2025-01-30T13:22:50.332334440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:22:50.333214 containerd[1492]: time="2025-01-30T13:22:50.332625240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:22:50.333214 containerd[1492]: time="2025-01-30T13:22:50.332649440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:22:50.333214 containerd[1492]: time="2025-01-30T13:22:50.332663280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:22:50.333214 containerd[1492]: time="2025-01-30T13:22:50.332675720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:22:50.333214 containerd[1492]: time="2025-01-30T13:22:50.332686000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.333214 containerd[1492]: time="2025-01-30T13:22:50.332701120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 30 13:22:50.333214 containerd[1492]: time="2025-01-30T13:22:50.332713040Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:22:50.333214 containerd[1492]: time="2025-01-30T13:22:50.332723760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:22:50.335714 containerd[1492]: time="2025-01-30T13:22:50.333088920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:22:50.335714 containerd[1492]: time="2025-01-30T13:22:50.333138400Z" level=info msg="Connect containerd service" Jan 30 13:22:50.335714 containerd[1492]: time="2025-01-30T13:22:50.333179560Z" level=info msg="using legacy CRI server" Jan 30 13:22:50.335714 containerd[1492]: time="2025-01-30T13:22:50.333186800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:22:50.335714 containerd[1492]: time="2025-01-30T13:22:50.333454120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:22:50.335714 containerd[1492]: time="2025-01-30T13:22:50.334551680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:22:50.338811 containerd[1492]: time="2025-01-30T13:22:50.338753840Z" level=info msg="Start subscribing containerd event" Jan 30 
13:22:50.339024 containerd[1492]: time="2025-01-30T13:22:50.339003240Z" level=info msg="Start recovering state" Jan 30 13:22:50.339177 containerd[1492]: time="2025-01-30T13:22:50.339159120Z" level=info msg="Start event monitor" Jan 30 13:22:50.339263 containerd[1492]: time="2025-01-30T13:22:50.339250400Z" level=info msg="Start snapshots syncer" Jan 30 13:22:50.339323 containerd[1492]: time="2025-01-30T13:22:50.339303160Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:22:50.339377 containerd[1492]: time="2025-01-30T13:22:50.339363920Z" level=info msg="Start streaming server" Jan 30 13:22:50.340511 containerd[1492]: time="2025-01-30T13:22:50.340463760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:22:50.340868 containerd[1492]: time="2025-01-30T13:22:50.340851840Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:22:50.341150 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:22:50.341683 containerd[1492]: time="2025-01-30T13:22:50.341658600Z" level=info msg="containerd successfully booted in 0.140013s" Jan 30 13:22:50.574721 tar[1475]: linux-arm64/LICENSE Jan 30 13:22:50.574922 tar[1475]: linux-arm64/README.md Jan 30 13:22:50.588614 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:22:50.594852 systemd-networkd[1386]: eth1: Gained IPv6LL Jan 30 13:22:50.596305 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Jan 30 13:22:50.604578 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:22:50.609314 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:22:50.625663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:22:50.630625 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 30 13:22:50.694603 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:22:50.931231 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:22:50.955639 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:22:50.967968 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:22:50.977416 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:22:50.979718 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:22:50.994282 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:22:51.005809 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:22:51.015033 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:22:51.020013 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:22:51.021215 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:22:51.042733 systemd-networkd[1386]: eth0: Gained IPv6LL Jan 30 13:22:51.044659 systemd-timesyncd[1349]: Network configuration changed, trying to establish connection. Jan 30 13:22:51.491197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:22:51.493034 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:22:51.496685 systemd[1]: Startup finished in 841ms (kernel) + 5.897s (initrd) + 4.946s (userspace) = 11.685s. 
Jan 30 13:22:51.507381 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:22:51.520051 agetty[1575]: failed to open credentials directory Jan 30 13:22:51.520453 agetty[1576]: failed to open credentials directory Jan 30 13:22:52.203258 kubelet[1582]: E0130 13:22:52.203201 1582 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:22:52.205694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:22:52.205862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:02.369602 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:23:02.380799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:02.505882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:02.514647 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:02.567136 kubelet[1601]: E0130 13:23:02.567079 1601 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:02.572218 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:02.572376 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:23:12.619222 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:23:12.627915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:12.747963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:12.751710 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:12.801935 kubelet[1618]: E0130 13:23:12.801868 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:12.804696 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:12.804898 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:21.363709 systemd-timesyncd[1349]: Contacted time server 194.50.19.117:123 (2.flatcar.pool.ntp.org). Jan 30 13:23:21.363837 systemd-timesyncd[1349]: Initial clock synchronization to Thu 2025-01-30 13:23:21.412768 UTC. Jan 30 13:23:22.869236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:23:22.878116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:23.005245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:23:23.010658 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:23.069719 kubelet[1634]: E0130 13:23:23.069634 1634 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:23.072715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:23.072884 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:33.120540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 13:23:33.134802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:33.292337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:33.312098 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:33.371361 kubelet[1649]: E0130 13:23:33.371224 1649 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:33.374808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:33.375032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:34.857247 update_engine[1467]: I20250130 13:23:34.857092 1467 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:23:34.939067 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1667) Jan 30 13:23:43.619611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 13:23:43.633364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:43.745679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:23:43.751936 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:43.804197 kubelet[1681]: E0130 13:23:43.804146 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:43.807898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:43.808064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:23:53.869491 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 30 13:23:53.875783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:23:54.003791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:23:54.007579 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:23:54.058552 kubelet[1696]: E0130 13:23:54.058427 1696 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:23:54.060975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:23:54.061151 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:04.119262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 30 13:24:04.124832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:04.281142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:04.286685 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:04.339288 kubelet[1713]: E0130 13:24:04.339237 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:04.342254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:04.342575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:14.369442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 30 13:24:14.380840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:24:14.509368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:14.515382 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:14.569080 kubelet[1729]: E0130 13:24:14.568927 1729 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:14.571988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:14.572168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:24.619287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 30 13:24:24.634594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:24.775532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:24.787999 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:24.838916 kubelet[1744]: E0130 13:24:24.838845 1744 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:24.841262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:24.841400 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:34.869359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Jan 30 13:24:34.876811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:35.026844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:35.032702 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:35.086369 kubelet[1761]: E0130 13:24:35.086278 1761 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:35.089380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:35.089793 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:40.860944 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:24:40.870271 systemd[1]: Started sshd@0-78.46.226.143:22-139.178.68.195:56164.service - OpenSSH per-connection server daemon (139.178.68.195:56164). Jan 30 13:24:41.869491 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 56164 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:41.871631 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:41.890940 systemd-logind[1463]: New session 1 of user core. Jan 30 13:24:41.894288 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:24:41.908448 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:24:41.928021 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:24:41.940112 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 13:24:41.944727 (systemd)[1774]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:24:42.063997 systemd[1774]: Queued start job for default target default.target. Jan 30 13:24:42.075438 systemd[1774]: Created slice app.slice - User Application Slice. Jan 30 13:24:42.075509 systemd[1774]: Reached target paths.target - Paths. Jan 30 13:24:42.075530 systemd[1774]: Reached target timers.target - Timers. Jan 30 13:24:42.077919 systemd[1774]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:24:42.104907 systemd[1774]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:24:42.105017 systemd[1774]: Reached target sockets.target - Sockets. Jan 30 13:24:42.105032 systemd[1774]: Reached target basic.target - Basic System. Jan 30 13:24:42.105105 systemd[1774]: Reached target default.target - Main User Target. Jan 30 13:24:42.105137 systemd[1774]: Startup finished in 151ms. Jan 30 13:24:42.105438 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:24:42.121976 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:24:42.817954 systemd[1]: Started sshd@1-78.46.226.143:22-139.178.68.195:56174.service - OpenSSH per-connection server daemon (139.178.68.195:56174). Jan 30 13:24:43.821094 sshd[1785]: Accepted publickey for core from 139.178.68.195 port 56174 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:43.823659 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:43.848687 systemd-logind[1463]: New session 2 of user core. Jan 30 13:24:43.854736 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:24:44.508517 sshd[1787]: Connection closed by 139.178.68.195 port 56174 Jan 30 13:24:44.510326 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:44.517194 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 30 13:24:44.518592 systemd[1]: sshd@1-78.46.226.143:22-139.178.68.195:56174.service: Deactivated successfully. Jan 30 13:24:44.522732 systemd-logind[1463]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:24:44.524178 systemd-logind[1463]: Removed session 2. Jan 30 13:24:44.683265 systemd[1]: Started sshd@2-78.46.226.143:22-139.178.68.195:56182.service - OpenSSH per-connection server daemon (139.178.68.195:56182). Jan 30 13:24:45.119584 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 30 13:24:45.127849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:45.278070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:45.284304 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:45.342906 kubelet[1802]: E0130 13:24:45.342616 1802 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:45.345673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:45.345827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:24:45.662620 sshd[1792]: Accepted publickey for core from 139.178.68.195 port 56182 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:45.664664 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:45.671835 systemd-logind[1463]: New session 3 of user core. Jan 30 13:24:45.678797 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 13:24:46.337497 sshd[1810]: Connection closed by 139.178.68.195 port 56182 Jan 30 13:24:46.338791 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:46.346583 systemd-logind[1463]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:24:46.348067 systemd[1]: sshd@2-78.46.226.143:22-139.178.68.195:56182.service: Deactivated successfully. Jan 30 13:24:46.353591 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:24:46.359129 systemd-logind[1463]: Removed session 3. Jan 30 13:24:46.527603 systemd[1]: Started sshd@3-78.46.226.143:22-139.178.68.195:40490.service - OpenSSH per-connection server daemon (139.178.68.195:40490). Jan 30 13:24:47.512163 sshd[1815]: Accepted publickey for core from 139.178.68.195 port 40490 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:47.516537 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:47.530070 systemd-logind[1463]: New session 4 of user core. Jan 30 13:24:47.532925 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:24:48.190831 sshd[1817]: Connection closed by 139.178.68.195 port 40490 Jan 30 13:24:48.190597 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:48.197378 systemd[1]: sshd@3-78.46.226.143:22-139.178.68.195:40490.service: Deactivated successfully. Jan 30 13:24:48.199422 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:24:48.200995 systemd-logind[1463]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:24:48.202420 systemd-logind[1463]: Removed session 4. Jan 30 13:24:48.379486 systemd[1]: Started sshd@4-78.46.226.143:22-139.178.68.195:40494.service - OpenSSH per-connection server daemon (139.178.68.195:40494). 
Jan 30 13:24:49.385213 sshd[1822]: Accepted publickey for core from 139.178.68.195 port 40494 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:49.387881 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:49.393847 systemd-logind[1463]: New session 5 of user core. Jan 30 13:24:49.404170 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:24:49.931827 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:24:49.932269 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:24:49.955283 sudo[1825]: pam_unix(sudo:session): session closed for user root Jan 30 13:24:50.121798 sshd[1824]: Connection closed by 139.178.68.195 port 40494 Jan 30 13:24:50.122853 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:50.130176 systemd[1]: sshd@4-78.46.226.143:22-139.178.68.195:40494.service: Deactivated successfully. Jan 30 13:24:50.132050 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:24:50.133742 systemd-logind[1463]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:24:50.135787 systemd-logind[1463]: Removed session 5. Jan 30 13:24:50.312063 systemd[1]: Started sshd@5-78.46.226.143:22-139.178.68.195:40510.service - OpenSSH per-connection server daemon (139.178.68.195:40510). Jan 30 13:24:51.310109 sshd[1830]: Accepted publickey for core from 139.178.68.195 port 40510 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:51.311869 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:51.318909 systemd-logind[1463]: New session 6 of user core. Jan 30 13:24:51.320805 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 13:24:51.836730 sudo[1834]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:24:51.837284 sudo[1834]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:24:51.844844 sudo[1834]: pam_unix(sudo:session): session closed for user root Jan 30 13:24:51.853451 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:24:51.853884 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:24:51.877256 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:24:51.911122 augenrules[1856]: No rules Jan 30 13:24:51.911959 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:24:51.912176 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:24:51.914726 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 30 13:24:52.074748 sshd[1832]: Connection closed by 139.178.68.195 port 40510 Jan 30 13:24:52.075400 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Jan 30 13:24:52.081088 systemd-logind[1463]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:24:52.082233 systemd[1]: sshd@5-78.46.226.143:22-139.178.68.195:40510.service: Deactivated successfully. Jan 30 13:24:52.084829 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:24:52.089204 systemd-logind[1463]: Removed session 6. Jan 30 13:24:52.244066 systemd[1]: Started sshd@6-78.46.226.143:22-139.178.68.195:40524.service - OpenSSH per-connection server daemon (139.178.68.195:40524). 
Jan 30 13:24:53.244606 sshd[1864]: Accepted publickey for core from 139.178.68.195 port 40524 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:24:53.247702 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:24:53.255545 systemd-logind[1463]: New session 7 of user core. Jan 30 13:24:53.264138 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:24:53.765826 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:24:53.766560 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:24:54.105891 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:24:54.106942 (dockerd)[1885]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:24:54.366574 dockerd[1885]: time="2025-01-30T13:24:54.363639906Z" level=info msg="Starting up" Jan 30 13:24:54.475848 dockerd[1885]: time="2025-01-30T13:24:54.475794667Z" level=info msg="Loading containers: start." Jan 30 13:24:54.697500 kernel: Initializing XFRM netlink socket Jan 30 13:24:54.799542 systemd-networkd[1386]: docker0: Link UP Jan 30 13:24:54.828236 dockerd[1885]: time="2025-01-30T13:24:54.828154361Z" level=info msg="Loading containers: done." Jan 30 13:24:54.844510 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2809965490-merged.mount: Deactivated successfully. 
Jan 30 13:24:54.846669 dockerd[1885]: time="2025-01-30T13:24:54.846601096Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:24:54.846772 dockerd[1885]: time="2025-01-30T13:24:54.846733699Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 13:24:54.846957 dockerd[1885]: time="2025-01-30T13:24:54.846938023Z" level=info msg="Daemon has completed initialization" Jan 30 13:24:54.890860 dockerd[1885]: time="2025-01-30T13:24:54.890598857Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:24:54.892125 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:24:55.369278 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 30 13:24:55.377155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:24:55.497379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:24:55.505967 (kubelet)[2081]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:24:55.563321 kubelet[2081]: E0130 13:24:55.563270 2081 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:24:55.565978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:24:55.566208 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:24:56.044686 containerd[1492]: time="2025-01-30T13:24:56.044371304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:24:56.730459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3439539105.mount: Deactivated successfully. Jan 30 13:24:58.851564 containerd[1492]: time="2025-01-30T13:24:58.851199379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:24:58.854803 containerd[1492]: time="2025-01-30T13:24:58.853968265Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027" Jan 30 13:24:58.856690 containerd[1492]: time="2025-01-30T13:24:58.856574228Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:24:58.861013 containerd[1492]: time="2025-01-30T13:24:58.860451052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:24:58.861945 containerd[1492]: time="2025-01-30T13:24:58.861896676Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.817475932s" Jan 30 13:24:58.861945 containerd[1492]: time="2025-01-30T13:24:58.861944677Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 13:24:58.892069 containerd[1492]: time="2025-01-30T13:24:58.892016134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 30 13:25:01.932345 containerd[1492]: time="2025-01-30T13:25:01.932252795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:01.934062 containerd[1492]: time="2025-01-30T13:25:01.933725698Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581" Jan 30 13:25:01.936316 containerd[1492]: time="2025-01-30T13:25:01.935914732Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:01.939801 containerd[1492]: time="2025-01-30T13:25:01.939390825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:01.941101 containerd[1492]: time="2025-01-30T13:25:01.940899489Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 3.048460868s" Jan 30 13:25:01.941101 containerd[1492]: time="2025-01-30T13:25:01.940948729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 13:25:01.966844 containerd[1492]: time="2025-01-30T13:25:01.966481043Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 30 13:25:04.201339 containerd[1492]: time="2025-01-30T13:25:04.201231903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:04.203758 containerd[1492]: time="2025-01-30T13:25:04.203171691Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358" Jan 30 13:25:04.205501 containerd[1492]: time="2025-01-30T13:25:04.204868835Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:04.208906 containerd[1492]: time="2025-01-30T13:25:04.208843373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:04.210976 containerd[1492]: time="2025-01-30T13:25:04.210916962Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 2.243936352s" Jan 30 13:25:04.210976 containerd[1492]: time="2025-01-30T13:25:04.210974243Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 13:25:04.239578 containerd[1492]: time="2025-01-30T13:25:04.239532015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:25:05.459569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475841944.mount: Deactivated successfully.
Jan 30 13:25:05.619020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 30 13:25:05.633007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:05.783828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:05.787598 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:05.866075 kubelet[2187]: E0130 13:25:05.865275 2187 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:05.869416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:05.870170 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 30 13:25:06.076933 containerd[1492]: time="2025-01-30T13:25:06.076756154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:06.080103 containerd[1492]: time="2025-01-30T13:25:06.079840036Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738" Jan 30 13:25:06.086995 containerd[1492]: time="2025-01-30T13:25:06.086859533Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:06.092456 containerd[1492]: time="2025-01-30T13:25:06.092349249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:06.093511 containerd[1492]: time="2025-01-30T13:25:06.093309622Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.853724167s" Jan 30 13:25:06.093511 containerd[1492]: time="2025-01-30T13:25:06.093364263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 13:25:06.129210 containerd[1492]: time="2025-01-30T13:25:06.129023594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:25:06.773604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709942172.mount: Deactivated successfully. 
Jan 30 13:25:07.663419 containerd[1492]: time="2025-01-30T13:25:07.662202243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:07.665680 containerd[1492]: time="2025-01-30T13:25:07.665614289Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 30 13:25:07.666852 containerd[1492]: time="2025-01-30T13:25:07.666801425Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:07.671134 containerd[1492]: time="2025-01-30T13:25:07.671086363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:07.673546 containerd[1492]: time="2025-01-30T13:25:07.673491715Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.54441916s" Jan 30 13:25:07.673690 containerd[1492]: time="2025-01-30T13:25:07.673675038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 13:25:07.702368 containerd[1492]: time="2025-01-30T13:25:07.702327024Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:25:08.235607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265358068.mount: Deactivated successfully. 
Jan 30 13:25:08.242861 containerd[1492]: time="2025-01-30T13:25:08.241927070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:08.242861 containerd[1492]: time="2025-01-30T13:25:08.242804801Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 30 13:25:08.244296 containerd[1492]: time="2025-01-30T13:25:08.244230420Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:08.246851 containerd[1492]: time="2025-01-30T13:25:08.246798494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:08.248306 containerd[1492]: time="2025-01-30T13:25:08.248250713Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 545.666925ms" Jan 30 13:25:08.248718 containerd[1492]: time="2025-01-30T13:25:08.248693519Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 13:25:08.274769 containerd[1492]: time="2025-01-30T13:25:08.274726262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:25:08.916701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2830959017.mount: Deactivated successfully. 
Jan 30 13:25:13.157246 containerd[1492]: time="2025-01-30T13:25:13.156062312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:13.158277 containerd[1492]: time="2025-01-30T13:25:13.158205897Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 30 13:25:13.159270 containerd[1492]: time="2025-01-30T13:25:13.159207589Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:13.163927 containerd[1492]: time="2025-01-30T13:25:13.163823604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:25:13.166405 containerd[1492]: time="2025-01-30T13:25:13.165877788Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.891101445s" Jan 30 13:25:13.166405 containerd[1492]: time="2025-01-30T13:25:13.165931069Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 13:25:16.120250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 30 13:25:16.130617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:16.284840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:25:16.288489 (kubelet)[2371]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:25:16.340929 kubelet[2371]: E0130 13:25:16.340775 2371 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:25:16.344525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:25:16.344745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:25:17.983896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:17.998132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:25:18.025675 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-7.scope)... Jan 30 13:25:18.025692 systemd[1]: Reloading... Jan 30 13:25:18.165551 zram_generator::config[2431]: No configuration found. Jan 30 13:25:18.260396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:25:18.328889 systemd[1]: Reloading finished in 302 ms. Jan 30 13:25:18.392238 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:25:18.392349 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:25:18.393597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:18.400258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:25:18.558972 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:25:18.559777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:25:18.617854 kubelet[2473]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:25:18.617854 kubelet[2473]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:25:18.617854 kubelet[2473]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:25:18.618411 kubelet[2473]: I0130 13:25:18.617952 2473 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:25:19.716283 kubelet[2473]: I0130 13:25:19.716199 2473 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:25:19.716283 kubelet[2473]: I0130 13:25:19.716257 2473 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:25:19.717002 kubelet[2473]: I0130 13:25:19.716534 2473 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:25:19.738207 kubelet[2473]: E0130 13:25:19.738173 2473 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.46.226.143:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.46.226.143:6443: connect: connection refused
Jan 30 13:25:19.740047 kubelet[2473]: I0130 13:25:19.739847 2473 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:25:19.751839 kubelet[2473]: I0130 13:25:19.751355 2473 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:25:19.753346 kubelet[2473]: I0130 13:25:19.753271 2473 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:25:19.753733 kubelet[2473]: I0130 13:25:19.753522 2473 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-1-a1b218d3fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Ex
perimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:25:19.753945 kubelet[2473]: I0130 13:25:19.753930 2473 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:25:19.754012 kubelet[2473]: I0130 13:25:19.754002 2473 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:25:19.754415 kubelet[2473]: I0130 13:25:19.754395 2473 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:25:19.755856 kubelet[2473]: I0130 13:25:19.755826 2473 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:25:19.756019 kubelet[2473]: I0130 13:25:19.756007 2473 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:25:19.756306 kubelet[2473]: I0130 13:25:19.756295 2473 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:25:19.757499 kubelet[2473]: I0130 13:25:19.756440 2473 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:25:19.757915 kubelet[2473]: W0130 13:25:19.757869 2473 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.46.226.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-1-a1b218d3fd&limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:19.758030 kubelet[2473]: E0130 13:25:19.758012 2473 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.46.226.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-1-a1b218d3fd&limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:19.758164 kubelet[2473]: W0130 13:25:19.758137 2473 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://78.46.226.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:19.758254 kubelet[2473]: E0130 13:25:19.758242 2473 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.46.226.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:19.758977 kubelet[2473]: I0130 13:25:19.758957 2473 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:25:19.759568 kubelet[2473]: I0130 13:25:19.759550 2473 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:25:19.759696 kubelet[2473]: W0130 13:25:19.759683 2473 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:25:19.762114 kubelet[2473]: I0130 13:25:19.762084 2473 server.go:1264] "Started kubelet" Jan 30 13:25:19.768524 kubelet[2473]: E0130 13:25:19.768213 2473 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.46.226.143:6443/api/v1/namespaces/default/events\": dial tcp 78.46.226.143:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-0-1-a1b218d3fd.181f7b45888ab7a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-0-1-a1b218d3fd,UID:ci-4186-1-0-1-a1b218d3fd,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-0-1-a1b218d3fd,},FirstTimestamp:2025-01-30 13:25:19.762053024 +0000 UTC m=+1.198184176,LastTimestamp:2025-01-30 13:25:19.762053024 +0000 UTC m=+1.198184176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-0-1-a1b218d3fd,}" Jan 30 13:25:19.768799 kubelet[2473]: I0130 13:25:19.768736 2473 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:25:19.770010 kubelet[2473]: I0130 13:25:19.769211 2473 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:25:19.770010 kubelet[2473]: I0130 13:25:19.769288 2473 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:25:19.771414 kubelet[2473]: I0130 13:25:19.771379 2473 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:25:19.775097 kubelet[2473]: I0130 13:25:19.771453 2473 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:25:19.776791 kubelet[2473]: I0130 13:25:19.776761 2473 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:25:19.779004 kubelet[2473]: E0130 13:25:19.778923 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.226.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-a1b218d3fd?timeout=10s\": dial tcp 78.46.226.143:6443: connect: connection refused" interval="200ms" Jan 30 13:25:19.780669 kubelet[2473]: W0130 13:25:19.780601 2473 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.46.226.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:19.780669 kubelet[2473]: E0130 13:25:19.780671 2473 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.46.226.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:19.781186 kubelet[2473]: I0130 
13:25:19.781136 2473 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:25:19.782588 kubelet[2473]: I0130 13:25:19.782567 2473 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:25:19.784273 kubelet[2473]: I0130 13:25:19.784154 2473 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:25:19.785558 kubelet[2473]: I0130 13:25:19.784611 2473 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:25:19.785558 kubelet[2473]: I0130 13:25:19.784626 2473 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:25:19.785558 kubelet[2473]: E0130 13:25:19.782768 2473 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:25:19.798847 kubelet[2473]: I0130 13:25:19.798617 2473 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:25:19.800551 kubelet[2473]: I0130 13:25:19.800457 2473 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:25:19.800824 kubelet[2473]: I0130 13:25:19.800796 2473 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:25:19.800869 kubelet[2473]: I0130 13:25:19.800836 2473 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:25:19.800944 kubelet[2473]: E0130 13:25:19.800889 2473 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:25:19.809567 kubelet[2473]: W0130 13:25:19.809429 2473 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.46.226.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:19.809567 kubelet[2473]: E0130 13:25:19.809527 2473 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.46.226.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:19.818332 kubelet[2473]: I0130 13:25:19.818297 2473 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:25:19.818332 kubelet[2473]: I0130 13:25:19.818324 2473 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:25:19.818661 kubelet[2473]: I0130 13:25:19.818353 2473 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:25:19.820975 kubelet[2473]: I0130 13:25:19.820947 2473 policy_none.go:49] "None policy: Start" Jan 30 13:25:19.821963 kubelet[2473]: I0130 13:25:19.821942 2473 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:25:19.822053 kubelet[2473]: I0130 13:25:19.821973 2473 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:25:19.829700 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 30 13:25:19.842398 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:25:19.847209 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:25:19.862875 kubelet[2473]: I0130 13:25:19.862643 2473 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:25:19.864340 kubelet[2473]: I0130 13:25:19.863057 2473 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:25:19.864340 kubelet[2473]: I0130 13:25:19.863990 2473 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:25:19.866682 kubelet[2473]: E0130 13:25:19.866599 2473 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-0-1-a1b218d3fd\" not found" Jan 30 13:25:19.880940 kubelet[2473]: I0130 13:25:19.880552 2473 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.881248 kubelet[2473]: E0130 13:25:19.881188 2473 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.226.143:6443/api/v1/nodes\": dial tcp 78.46.226.143:6443: connect: connection refused" node="ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.901203 kubelet[2473]: I0130 13:25:19.901100 2473 topology_manager.go:215] "Topology Admit Handler" podUID="4e990b4d2984557577357961259eeb75" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.903791 kubelet[2473]: I0130 13:25:19.903756 2473 topology_manager.go:215] "Topology Admit Handler" podUID="8998c15ccb5a3241e3624f3dd6a48abf" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.906776 kubelet[2473]: I0130 13:25:19.906182 2473 topology_manager.go:215] "Topology Admit Handler" 
podUID="435804bbfdd12be7e45f08823b5fd569" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.916963 systemd[1]: Created slice kubepods-burstable-pod4e990b4d2984557577357961259eeb75.slice - libcontainer container kubepods-burstable-pod4e990b4d2984557577357961259eeb75.slice. Jan 30 13:25:19.941838 systemd[1]: Created slice kubepods-burstable-pod8998c15ccb5a3241e3624f3dd6a48abf.slice - libcontainer container kubepods-burstable-pod8998c15ccb5a3241e3624f3dd6a48abf.slice. Jan 30 13:25:19.955294 systemd[1]: Created slice kubepods-burstable-pod435804bbfdd12be7e45f08823b5fd569.slice - libcontainer container kubepods-burstable-pod435804bbfdd12be7e45f08823b5fd569.slice. Jan 30 13:25:19.980733 kubelet[2473]: E0130 13:25:19.980562 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.226.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-a1b218d3fd?timeout=10s\": dial tcp 78.46.226.143:6443: connect: connection refused" interval="400ms" Jan 30 13:25:19.986597 kubelet[2473]: I0130 13:25:19.986128 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.986597 kubelet[2473]: I0130 13:25:19.986203 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8998c15ccb5a3241e3624f3dd6a48abf-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-1-a1b218d3fd\" (UID: \"8998c15ccb5a3241e3624f3dd6a48abf\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.986597 kubelet[2473]: I0130 13:25:19.986271 2473 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.986597 kubelet[2473]: I0130 13:25:19.986306 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.986597 kubelet[2473]: I0130 13:25:19.986341 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.986959 kubelet[2473]: I0130 13:25:19.986372 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.986959 kubelet[2473]: I0130 13:25:19.986403 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e990b4d2984557577357961259eeb75-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-1-a1b218d3fd\" (UID: 
\"4e990b4d2984557577357961259eeb75\") " pod="kube-system/kube-scheduler-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.986959 kubelet[2473]: I0130 13:25:19.986453 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8998c15ccb5a3241e3624f3dd6a48abf-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-1-a1b218d3fd\" (UID: \"8998c15ccb5a3241e3624f3dd6a48abf\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:19.986959 kubelet[2473]: I0130 13:25:19.986518 2473 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8998c15ccb5a3241e3624f3dd6a48abf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-1-a1b218d3fd\" (UID: \"8998c15ccb5a3241e3624f3dd6a48abf\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:20.086133 kubelet[2473]: I0130 13:25:20.086080 2473 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:20.087524 kubelet[2473]: E0130 13:25:20.087478 2473 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.226.143:6443/api/v1/nodes\": dial tcp 78.46.226.143:6443: connect: connection refused" node="ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:20.238365 containerd[1492]: time="2025-01-30T13:25:20.238072767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-1-a1b218d3fd,Uid:4e990b4d2984557577357961259eeb75,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:20.251778 containerd[1492]: time="2025-01-30T13:25:20.251663068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-1-a1b218d3fd,Uid:8998c15ccb5a3241e3624f3dd6a48abf,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:20.263443 containerd[1492]: time="2025-01-30T13:25:20.263033145Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-1-a1b218d3fd,Uid:435804bbfdd12be7e45f08823b5fd569,Namespace:kube-system,Attempt:0,}" Jan 30 13:25:20.381973 kubelet[2473]: E0130 13:25:20.381890 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.226.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-a1b218d3fd?timeout=10s\": dial tcp 78.46.226.143:6443: connect: connection refused" interval="800ms" Jan 30 13:25:20.491160 kubelet[2473]: I0130 13:25:20.490748 2473 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:20.491279 kubelet[2473]: E0130 13:25:20.491187 2473 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.226.143:6443/api/v1/nodes\": dial tcp 78.46.226.143:6443: connect: connection refused" node="ci-4186-1-0-1-a1b218d3fd" Jan 30 13:25:20.787157 kubelet[2473]: W0130 13:25:20.786444 2473 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.46.226.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:20.787157 kubelet[2473]: E0130 13:25:20.786506 2473 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.46.226.143:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:20.806186 kubelet[2473]: W0130 13:25:20.805374 2473 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.46.226.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-1-a1b218d3fd&limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:20.806186 kubelet[2473]: E0130 13:25:20.805437 2473 reflector.go:150] 
k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.46.226.143:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-0-1-a1b218d3fd&limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:20.813513 kubelet[2473]: W0130 13:25:20.813219 2473 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.46.226.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:20.813513 kubelet[2473]: E0130 13:25:20.813277 2473 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.46.226.143:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:20.818372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2225678528.mount: Deactivated successfully. 
Jan 30 13:25:20.830927 containerd[1492]: time="2025-01-30T13:25:20.829858216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:20.832303 containerd[1492]: time="2025-01-30T13:25:20.832163440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 30 13:25:20.837088 containerd[1492]: time="2025-01-30T13:25:20.835577516Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:20.842315 containerd[1492]: time="2025-01-30T13:25:20.840902691Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:20.846705 containerd[1492]: time="2025-01-30T13:25:20.844600569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:20.846705 containerd[1492]: time="2025-01-30T13:25:20.846535669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 608.370421ms" Jan 30 13:25:20.848276 containerd[1492]: time="2025-01-30T13:25:20.848196046Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:25:20.848598 containerd[1492]: time="2025-01-30T13:25:20.848544890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:25:20.854979 containerd[1492]: time="2025-01-30T13:25:20.854651313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:25:20.855693 containerd[1492]: time="2025-01-30T13:25:20.855650883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 592.502257ms" Jan 30 13:25:20.874887 containerd[1492]: time="2025-01-30T13:25:20.874192716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 622.404286ms" Jan 30 13:25:20.980080 containerd[1492]: time="2025-01-30T13:25:20.979875410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:20.980080 containerd[1492]: time="2025-01-30T13:25:20.979984731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:20.980080 containerd[1492]: time="2025-01-30T13:25:20.980002131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:20.980760 containerd[1492]: time="2025-01-30T13:25:20.980554817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:20.983364 containerd[1492]: time="2025-01-30T13:25:20.982407476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:20.983364 containerd[1492]: time="2025-01-30T13:25:20.982501357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:20.983364 containerd[1492]: time="2025-01-30T13:25:20.982512997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:20.983364 containerd[1492]: time="2025-01-30T13:25:20.982603798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:20.983747 containerd[1492]: time="2025-01-30T13:25:20.983045083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:25:20.984031 containerd[1492]: time="2025-01-30T13:25:20.983612049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:25:20.984368 containerd[1492]: time="2025-01-30T13:25:20.984164215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:20.985213 containerd[1492]: time="2025-01-30T13:25:20.985054504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:25:21.007771 systemd[1]: Started cri-containerd-c2b307894f1ba47acf22d3a06a2cfdd2e4ed6e9cf1f724015f4e539d829332dd.scope - libcontainer container c2b307894f1ba47acf22d3a06a2cfdd2e4ed6e9cf1f724015f4e539d829332dd. Jan 30 13:25:21.012929 systemd[1]: Started cri-containerd-2198d7bd1df03a91a9bdf17e0313b3874f656071194696dbd6bc1ac2e0a551e4.scope - libcontainer container 2198d7bd1df03a91a9bdf17e0313b3874f656071194696dbd6bc1ac2e0a551e4. Jan 30 13:25:21.032746 systemd[1]: Started cri-containerd-13dda687d0d2477fdbb0bf4c0c3ca5b0e7fd95dff3918181185fe1d830bf7e2d.scope - libcontainer container 13dda687d0d2477fdbb0bf4c0c3ca5b0e7fd95dff3918181185fe1d830bf7e2d. Jan 30 13:25:21.081705 containerd[1492]: time="2025-01-30T13:25:21.081389286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-0-1-a1b218d3fd,Uid:8998c15ccb5a3241e3624f3dd6a48abf,Namespace:kube-system,Attempt:0,} returns sandbox id \"2198d7bd1df03a91a9bdf17e0313b3874f656071194696dbd6bc1ac2e0a551e4\"" Jan 30 13:25:21.095640 containerd[1492]: time="2025-01-30T13:25:21.094861863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-0-1-a1b218d3fd,Uid:435804bbfdd12be7e45f08823b5fd569,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2b307894f1ba47acf22d3a06a2cfdd2e4ed6e9cf1f724015f4e539d829332dd\"" Jan 30 13:25:21.102301 containerd[1492]: time="2025-01-30T13:25:21.102196098Z" level=info msg="CreateContainer within sandbox \"2198d7bd1df03a91a9bdf17e0313b3874f656071194696dbd6bc1ac2e0a551e4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:25:21.113625 containerd[1492]: time="2025-01-30T13:25:21.112999288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-0-1-a1b218d3fd,Uid:4e990b4d2984557577357961259eeb75,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"13dda687d0d2477fdbb0bf4c0c3ca5b0e7fd95dff3918181185fe1d830bf7e2d\"" Jan 30 13:25:21.113625 containerd[1492]: time="2025-01-30T13:25:21.113391452Z" level=info msg="CreateContainer within sandbox \"c2b307894f1ba47acf22d3a06a2cfdd2e4ed6e9cf1f724015f4e539d829332dd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:25:21.122887 containerd[1492]: time="2025-01-30T13:25:21.122837468Z" level=info msg="CreateContainer within sandbox \"13dda687d0d2477fdbb0bf4c0c3ca5b0e7fd95dff3918181185fe1d830bf7e2d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:25:21.130876 containerd[1492]: time="2025-01-30T13:25:21.130671067Z" level=info msg="CreateContainer within sandbox \"2198d7bd1df03a91a9bdf17e0313b3874f656071194696dbd6bc1ac2e0a551e4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee8b07f6f466216bdbc0fcaf23a5dd5ef2d5a70273abb7cbec3f6371b2082730\"" Jan 30 13:25:21.132948 containerd[1492]: time="2025-01-30T13:25:21.132883690Z" level=info msg="StartContainer for \"ee8b07f6f466216bdbc0fcaf23a5dd5ef2d5a70273abb7cbec3f6371b2082730\"" Jan 30 13:25:21.135659 containerd[1492]: time="2025-01-30T13:25:21.135613678Z" level=info msg="CreateContainer within sandbox \"c2b307894f1ba47acf22d3a06a2cfdd2e4ed6e9cf1f724015f4e539d829332dd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe89e9b1208483bee1d7374cf6c199703d81046b1afb927f970d4178b3abe1de\"" Jan 30 13:25:21.136783 containerd[1492]: time="2025-01-30T13:25:21.136750569Z" level=info msg="StartContainer for \"fe89e9b1208483bee1d7374cf6c199703d81046b1afb927f970d4178b3abe1de\"" Jan 30 13:25:21.149669 containerd[1492]: time="2025-01-30T13:25:21.149622780Z" level=info msg="CreateContainer within sandbox \"13dda687d0d2477fdbb0bf4c0c3ca5b0e7fd95dff3918181185fe1d830bf7e2d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"920ce56c00c8dc4fb8a5462b4626bbc681faff0e5a2edbe12eab1085b87f20b3\"" Jan 30 13:25:21.151812 containerd[1492]: time="2025-01-30T13:25:21.151773442Z" level=info msg="StartContainer for \"920ce56c00c8dc4fb8a5462b4626bbc681faff0e5a2edbe12eab1085b87f20b3\"" Jan 30 13:25:21.157005 kubelet[2473]: W0130 13:25:21.156927 2473 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.46.226.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:21.157005 kubelet[2473]: E0130 13:25:21.157011 2473 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.46.226.143:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.226.143:6443: connect: connection refused Jan 30 13:25:21.183902 kubelet[2473]: E0130 13:25:21.182801 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.226.143:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-0-1-a1b218d3fd?timeout=10s\": dial tcp 78.46.226.143:6443: connect: connection refused" interval="1.6s" Jan 30 13:25:21.189905 systemd[1]: Started cri-containerd-ee8b07f6f466216bdbc0fcaf23a5dd5ef2d5a70273abb7cbec3f6371b2082730.scope - libcontainer container ee8b07f6f466216bdbc0fcaf23a5dd5ef2d5a70273abb7cbec3f6371b2082730. Jan 30 13:25:21.200997 systemd[1]: Started cri-containerd-fe89e9b1208483bee1d7374cf6c199703d81046b1afb927f970d4178b3abe1de.scope - libcontainer container fe89e9b1208483bee1d7374cf6c199703d81046b1afb927f970d4178b3abe1de. Jan 30 13:25:21.231875 systemd[1]: Started cri-containerd-920ce56c00c8dc4fb8a5462b4626bbc681faff0e5a2edbe12eab1085b87f20b3.scope - libcontainer container 920ce56c00c8dc4fb8a5462b4626bbc681faff0e5a2edbe12eab1085b87f20b3. 
Jan 30 13:25:21.286113 containerd[1492]: time="2025-01-30T13:25:21.285692323Z" level=info msg="StartContainer for \"ee8b07f6f466216bdbc0fcaf23a5dd5ef2d5a70273abb7cbec3f6371b2082730\" returns successfully"
Jan 30 13:25:21.296111 kubelet[2473]: I0130 13:25:21.295946 2473 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:21.300779 kubelet[2473]: E0130 13:25:21.300209 2473 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.226.143:6443/api/v1/nodes\": dial tcp 78.46.226.143:6443: connect: connection refused" node="ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:21.306427 containerd[1492]: time="2025-01-30T13:25:21.305891769Z" level=info msg="StartContainer for \"fe89e9b1208483bee1d7374cf6c199703d81046b1afb927f970d4178b3abe1de\" returns successfully"
Jan 30 13:25:21.323522 containerd[1492]: time="2025-01-30T13:25:21.323425347Z" level=info msg="StartContainer for \"920ce56c00c8dc4fb8a5462b4626bbc681faff0e5a2edbe12eab1085b87f20b3\" returns successfully"
Jan 30 13:25:22.902394 kubelet[2473]: I0130 13:25:22.902357 2473 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:23.773855 kubelet[2473]: E0130 13:25:23.773804 2473 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-0-1-a1b218d3fd\" not found" node="ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:23.874080 kubelet[2473]: I0130 13:25:23.874028 2473 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:24.759767 kubelet[2473]: I0130 13:25:24.759706 2473 apiserver.go:52] "Watching apiserver"
Jan 30 13:25:24.783933 kubelet[2473]: I0130 13:25:24.783517 2473 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:25:26.017502 systemd[1]: Reloading requested from client PID 2745 ('systemctl') (unit session-7.scope)...
Jan 30 13:25:26.017526 systemd[1]: Reloading...
Jan 30 13:25:26.110624 zram_generator::config[2788]: No configuration found.
Jan 30 13:25:26.222805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:25:26.308961 systemd[1]: Reloading finished in 290 ms.
Jan 30 13:25:26.354375 kubelet[2473]: I0130 13:25:26.354233 2473 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:25:26.354650 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:25:26.369667 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 13:25:26.369949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:25:26.370016 systemd[1]: kubelet.service: Consumed 1.671s CPU time, 113.6M memory peak, 0B memory swap peak.
Jan 30 13:25:26.375887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:25:26.522831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:25:26.534434 (kubelet)[2830]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:25:26.604194 kubelet[2830]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:25:26.604194 kubelet[2830]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:25:26.604194 kubelet[2830]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:25:26.604194 kubelet[2830]: I0130 13:25:26.603547 2830 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:25:26.611762 kubelet[2830]: I0130 13:25:26.610550 2830 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 13:25:26.611762 kubelet[2830]: I0130 13:25:26.610580 2830 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:25:26.611762 kubelet[2830]: I0130 13:25:26.610792 2830 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 13:25:26.613138 kubelet[2830]: I0130 13:25:26.613107 2830 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 13:25:26.615095 kubelet[2830]: I0130 13:25:26.615044 2830 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:25:26.628069 kubelet[2830]: I0130 13:25:26.628032 2830 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:25:26.628539 kubelet[2830]: I0130 13:25:26.628493 2830 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:25:26.628859 kubelet[2830]: I0130 13:25:26.628621 2830 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-0-1-a1b218d3fd","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 13:25:26.628994 kubelet[2830]: I0130 13:25:26.628980 2830 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:25:26.629049 kubelet[2830]: I0130 13:25:26.629041 2830 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 13:25:26.629151 kubelet[2830]: I0130 13:25:26.629140 2830 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:25:26.629416 kubelet[2830]: I0130 13:25:26.629399 2830 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 13:25:26.629558 kubelet[2830]: I0130 13:25:26.629546 2830 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:25:26.629663 kubelet[2830]: I0130 13:25:26.629653 2830 kubelet.go:312] "Adding apiserver pod source"
Jan 30 13:25:26.629733 kubelet[2830]: I0130 13:25:26.629723 2830 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:25:26.632015 kubelet[2830]: I0130 13:25:26.631964 2830 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 13:25:26.632203 kubelet[2830]: I0130 13:25:26.632188 2830 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:25:26.632712 kubelet[2830]: I0130 13:25:26.632687 2830 server.go:1264] "Started kubelet"
Jan 30 13:25:26.635309 kubelet[2830]: I0130 13:25:26.635274 2830 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:25:26.647713 kubelet[2830]: I0130 13:25:26.647635 2830 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:25:26.648898 kubelet[2830]: I0130 13:25:26.648845 2830 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 13:25:26.652714 kubelet[2830]: I0130 13:25:26.650952 2830 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:25:26.652714 kubelet[2830]: I0130 13:25:26.651233 2830 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:25:26.655781 kubelet[2830]: I0130 13:25:26.655721 2830 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 13:25:26.656987 kubelet[2830]: I0130 13:25:26.656941 2830 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:25:26.657359 kubelet[2830]: I0130 13:25:26.657342 2830 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:25:26.668573 kubelet[2830]: I0130 13:25:26.668517 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:25:26.673318 kubelet[2830]: I0130 13:25:26.673278 2830 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:25:26.673514 kubelet[2830]: I0130 13:25:26.673489 2830 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:25:26.673607 kubelet[2830]: I0130 13:25:26.673598 2830 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 13:25:26.673731 kubelet[2830]: E0130 13:25:26.673699 2830 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:25:26.684018 kubelet[2830]: I0130 13:25:26.683963 2830 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:25:26.684176 kubelet[2830]: I0130 13:25:26.684078 2830 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:25:26.689107 kubelet[2830]: I0130 13:25:26.689047 2830 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:25:26.693540 kubelet[2830]: E0130 13:25:26.693509 2830 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:25:26.757635 kubelet[2830]: I0130 13:25:26.757600 2830 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:25:26.757812 kubelet[2830]: I0130 13:25:26.757798 2830 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:25:26.757878 kubelet[2830]: I0130 13:25:26.757869 2830 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:25:26.758593 kubelet[2830]: I0130 13:25:26.758123 2830 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 13:25:26.758593 kubelet[2830]: I0130 13:25:26.758142 2830 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 13:25:26.758593 kubelet[2830]: I0130 13:25:26.758168 2830 policy_none.go:49] "None policy: Start"
Jan 30 13:25:26.759525 kubelet[2830]: I0130 13:25:26.759326 2830 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:25:26.759525 kubelet[2830]: I0130 13:25:26.759359 2830 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:25:26.760457 kubelet[2830]: I0130 13:25:26.759717 2830 state_mem.go:75] "Updated machine memory state"
Jan 30 13:25:26.761627 kubelet[2830]: I0130 13:25:26.761581 2830 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.768251 kubelet[2830]: I0130 13:25:26.768214 2830 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:25:26.768673 kubelet[2830]: I0130 13:25:26.768625 2830 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:25:26.768867 kubelet[2830]: I0130 13:25:26.768846 2830 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:25:26.774105 kubelet[2830]: I0130 13:25:26.774065 2830 topology_manager.go:215] "Topology Admit Handler" podUID="4e990b4d2984557577357961259eeb75" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.775644 kubelet[2830]: I0130 13:25:26.774406 2830 topology_manager.go:215] "Topology Admit Handler" podUID="8998c15ccb5a3241e3624f3dd6a48abf" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.775644 kubelet[2830]: I0130 13:25:26.774462 2830 topology_manager.go:215] "Topology Admit Handler" podUID="435804bbfdd12be7e45f08823b5fd569" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.784891 kubelet[2830]: I0130 13:25:26.784854 2830 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.785053 kubelet[2830]: I0130 13:25:26.784963 2830 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.794951 kubelet[2830]: E0130 13:25:26.794912 2830 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186-1-0-1-a1b218d3fd\" already exists" pod="kube-system/kube-scheduler-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.860590 kubelet[2830]: I0130 13:25:26.859494 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-ca-certs\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.861273 kubelet[2830]: I0130 13:25:26.860872 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.861273 kubelet[2830]: I0130 13:25:26.860932 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.861273 kubelet[2830]: I0130 13:25:26.860973 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.861273 kubelet[2830]: I0130 13:25:26.861011 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8998c15ccb5a3241e3624f3dd6a48abf-ca-certs\") pod \"kube-apiserver-ci-4186-1-0-1-a1b218d3fd\" (UID: \"8998c15ccb5a3241e3624f3dd6a48abf\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.861273 kubelet[2830]: I0130 13:25:26.861044 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8998c15ccb5a3241e3624f3dd6a48abf-k8s-certs\") pod \"kube-apiserver-ci-4186-1-0-1-a1b218d3fd\" (UID: \"8998c15ccb5a3241e3624f3dd6a48abf\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.861620 kubelet[2830]: I0130 13:25:26.861076 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/435804bbfdd12be7e45f08823b5fd569-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-0-1-a1b218d3fd\" (UID: \"435804bbfdd12be7e45f08823b5fd569\") " pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.861620 kubelet[2830]: I0130 13:25:26.861109 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e990b4d2984557577357961259eeb75-kubeconfig\") pod \"kube-scheduler-ci-4186-1-0-1-a1b218d3fd\" (UID: \"4e990b4d2984557577357961259eeb75\") " pod="kube-system/kube-scheduler-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:26.861620 kubelet[2830]: I0130 13:25:26.861140 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8998c15ccb5a3241e3624f3dd6a48abf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-0-1-a1b218d3fd\" (UID: \"8998c15ccb5a3241e3624f3dd6a48abf\") " pod="kube-system/kube-apiserver-ci-4186-1-0-1-a1b218d3fd"
Jan 30 13:25:27.024206 sudo[2862]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jan 30 13:25:27.026036 sudo[2862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jan 30 13:25:27.496343 sudo[2862]: pam_unix(sudo:session): session closed for user root
Jan 30 13:25:27.630959 kubelet[2830]: I0130 13:25:27.630904 2830 apiserver.go:52] "Watching apiserver"
Jan 30 13:25:27.658059 kubelet[2830]: I0130 13:25:27.658022 2830 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:25:27.771765 kubelet[2830]: I0130 13:25:27.771378 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-0-1-a1b218d3fd" podStartSLOduration=1.7713561260000001 podStartE2EDuration="1.771356126s" podCreationTimestamp="2025-01-30 13:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:27.770531279 +0000 UTC m=+1.227637494" watchObservedRunningTime="2025-01-30 13:25:27.771356126 +0000 UTC m=+1.228462261"
Jan 30 13:25:27.802199 kubelet[2830]: I0130 13:25:27.801659 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-0-1-a1b218d3fd" podStartSLOduration=2.801625923 podStartE2EDuration="2.801625923s" podCreationTimestamp="2025-01-30 13:25:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:27.785346094 +0000 UTC m=+1.242452269" watchObservedRunningTime="2025-01-30 13:25:27.801625923 +0000 UTC m=+1.258732098"
Jan 30 13:25:29.918035 sudo[1867]: pam_unix(sudo:session): session closed for user root
Jan 30 13:25:30.076907 sshd[1866]: Connection closed by 139.178.68.195 port 40524
Jan 30 13:25:30.077993 sshd-session[1864]: pam_unix(sshd:session): session closed for user core
Jan 30 13:25:30.084138 systemd-logind[1463]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:25:30.084535 systemd[1]: sshd@6-78.46.226.143:22-139.178.68.195:40524.service: Deactivated successfully.
Jan 30 13:25:30.086907 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:25:30.087103 systemd[1]: session-7.scope: Consumed 7.622s CPU time, 189.4M memory peak, 0B memory swap peak.
Jan 30 13:25:30.089573 systemd-logind[1463]: Removed session 7.
Jan 30 13:25:35.616169 kubelet[2830]: I0130 13:25:35.615649 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-0-1-a1b218d3fd" podStartSLOduration=9.615607437 podStartE2EDuration="9.615607437s" podCreationTimestamp="2025-01-30 13:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:27.803233657 +0000 UTC m=+1.260339872" watchObservedRunningTime="2025-01-30 13:25:35.615607437 +0000 UTC m=+9.072713572"
Jan 30 13:25:41.853644 kubelet[2830]: I0130 13:25:41.853592 2830 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 13:25:41.858541 containerd[1492]: time="2025-01-30T13:25:41.855406783Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:25:41.859007 kubelet[2830]: I0130 13:25:41.856082 2830 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 13:25:42.762771 kubelet[2830]: I0130 13:25:42.762706 2830 topology_manager.go:215] "Topology Admit Handler" podUID="066a5e39-2c4f-4b34-bc64-b0e1716b9b0d" podNamespace="kube-system" podName="kube-proxy-hmkfk"
Jan 30 13:25:42.774230 kubelet[2830]: I0130 13:25:42.773917 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/066a5e39-2c4f-4b34-bc64-b0e1716b9b0d-kube-proxy\") pod \"kube-proxy-hmkfk\" (UID: \"066a5e39-2c4f-4b34-bc64-b0e1716b9b0d\") " pod="kube-system/kube-proxy-hmkfk"
Jan 30 13:25:42.774230 kubelet[2830]: I0130 13:25:42.773956 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/066a5e39-2c4f-4b34-bc64-b0e1716b9b0d-xtables-lock\") pod \"kube-proxy-hmkfk\" (UID: \"066a5e39-2c4f-4b34-bc64-b0e1716b9b0d\") " pod="kube-system/kube-proxy-hmkfk"
Jan 30 13:25:42.774230 kubelet[2830]: I0130 13:25:42.773976 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/066a5e39-2c4f-4b34-bc64-b0e1716b9b0d-lib-modules\") pod \"kube-proxy-hmkfk\" (UID: \"066a5e39-2c4f-4b34-bc64-b0e1716b9b0d\") " pod="kube-system/kube-proxy-hmkfk"
Jan 30 13:25:42.774230 kubelet[2830]: I0130 13:25:42.773993 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-945b5\" (UniqueName: \"kubernetes.io/projected/066a5e39-2c4f-4b34-bc64-b0e1716b9b0d-kube-api-access-945b5\") pod \"kube-proxy-hmkfk\" (UID: \"066a5e39-2c4f-4b34-bc64-b0e1716b9b0d\") " pod="kube-system/kube-proxy-hmkfk"
Jan 30 13:25:42.779445 systemd[1]: Created slice kubepods-besteffort-pod066a5e39_2c4f_4b34_bc64_b0e1716b9b0d.slice - libcontainer container kubepods-besteffort-pod066a5e39_2c4f_4b34_bc64_b0e1716b9b0d.slice.
Jan 30 13:25:42.795042 kubelet[2830]: I0130 13:25:42.794970 2830 topology_manager.go:215] "Topology Admit Handler" podUID="250d3552-7dbe-45aa-926c-9aed6f589159" podNamespace="kube-system" podName="cilium-zw8fc"
Jan 30 13:25:42.807623 systemd[1]: Created slice kubepods-burstable-pod250d3552_7dbe_45aa_926c_9aed6f589159.slice - libcontainer container kubepods-burstable-pod250d3552_7dbe_45aa_926c_9aed6f589159.slice.
Jan 30 13:25:42.874822 kubelet[2830]: I0130 13:25:42.874767 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-hostproc\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875236 kubelet[2830]: I0130 13:25:42.874839 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-cgroup\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875236 kubelet[2830]: I0130 13:25:42.874876 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-etc-cni-netd\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875236 kubelet[2830]: I0130 13:25:42.874907 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250d3552-7dbe-45aa-926c-9aed6f589159-hubble-tls\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875236 kubelet[2830]: I0130 13:25:42.874964 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t2d6\" (UniqueName: \"kubernetes.io/projected/250d3552-7dbe-45aa-926c-9aed6f589159-kube-api-access-4t2d6\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875236 kubelet[2830]: I0130 13:25:42.875031 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-host-proc-sys-net\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875236 kubelet[2830]: I0130 13:25:42.875060 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-bpf-maps\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875395 kubelet[2830]: I0130 13:25:42.875094 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cni-path\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875395 kubelet[2830]: I0130 13:25:42.875139 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-xtables-lock\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875395 kubelet[2830]: I0130 13:25:42.875188 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-config-path\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875395 kubelet[2830]: I0130 13:25:42.875234 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-run\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875395 kubelet[2830]: I0130 13:25:42.875321 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250d3552-7dbe-45aa-926c-9aed6f589159-clustermesh-secrets\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875395 kubelet[2830]: I0130 13:25:42.875358 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-lib-modules\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.875730 kubelet[2830]: I0130 13:25:42.875394 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-host-proc-sys-kernel\") pod \"cilium-zw8fc\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " pod="kube-system/cilium-zw8fc"
Jan 30 13:25:42.956328 kubelet[2830]: I0130 13:25:42.955167 2830 topology_manager.go:215] "Topology Admit Handler" podUID="4cef61b5-a560-4b0f-b7cb-c966ab39c322" podNamespace="kube-system" podName="cilium-operator-599987898-fldkb"
Jan 30 13:25:42.966100 systemd[1]: Created slice kubepods-besteffort-pod4cef61b5_a560_4b0f_b7cb_c966ab39c322.slice - libcontainer container kubepods-besteffort-pod4cef61b5_a560_4b0f_b7cb_c966ab39c322.slice.
Jan 30 13:25:42.976592 kubelet[2830]: I0130 13:25:42.976545 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cef61b5-a560-4b0f-b7cb-c966ab39c322-cilium-config-path\") pod \"cilium-operator-599987898-fldkb\" (UID: \"4cef61b5-a560-4b0f-b7cb-c966ab39c322\") " pod="kube-system/cilium-operator-599987898-fldkb"
Jan 30 13:25:42.976870 kubelet[2830]: I0130 13:25:42.976824 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjkm9\" (UniqueName: \"kubernetes.io/projected/4cef61b5-a560-4b0f-b7cb-c966ab39c322-kube-api-access-kjkm9\") pod \"cilium-operator-599987898-fldkb\" (UID: \"4cef61b5-a560-4b0f-b7cb-c966ab39c322\") " pod="kube-system/cilium-operator-599987898-fldkb"
Jan 30 13:25:43.094698 containerd[1492]: time="2025-01-30T13:25:43.093395967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmkfk,Uid:066a5e39-2c4f-4b34-bc64-b0e1716b9b0d,Namespace:kube-system,Attempt:0,}"
Jan 30 13:25:43.116600 containerd[1492]: time="2025-01-30T13:25:43.116365692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zw8fc,Uid:250d3552-7dbe-45aa-926c-9aed6f589159,Namespace:kube-system,Attempt:0,}"
Jan 30 13:25:43.122612 containerd[1492]: time="2025-01-30T13:25:43.122039653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:25:43.122612 containerd[1492]: time="2025-01-30T13:25:43.122124613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:25:43.122612 containerd[1492]: time="2025-01-30T13:25:43.122140253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:25:43.122612 containerd[1492]: time="2025-01-30T13:25:43.122241894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:25:43.147794 systemd[1]: Started cri-containerd-32104d8731104cc708364926c4c7327acfe314cc050c787a8a83e4dc1a1671c1.scope - libcontainer container 32104d8731104cc708364926c4c7327acfe314cc050c787a8a83e4dc1a1671c1.
Jan 30 13:25:43.166378 containerd[1492]: time="2025-01-30T13:25:43.165317724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:25:43.166378 containerd[1492]: time="2025-01-30T13:25:43.165395964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:25:43.166378 containerd[1492]: time="2025-01-30T13:25:43.165414285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:25:43.166378 containerd[1492]: time="2025-01-30T13:25:43.165539965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:25:43.187023 containerd[1492]: time="2025-01-30T13:25:43.186857279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hmkfk,Uid:066a5e39-2c4f-4b34-bc64-b0e1716b9b0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"32104d8731104cc708364926c4c7327acfe314cc050c787a8a83e4dc1a1671c1\""
Jan 30 13:25:43.203726 systemd[1]: Started cri-containerd-3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72.scope - libcontainer container 3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72.
Jan 30 13:25:43.204536 containerd[1492]: time="2025-01-30T13:25:43.204493965Z" level=info msg="CreateContainer within sandbox \"32104d8731104cc708364926c4c7327acfe314cc050c787a8a83e4dc1a1671c1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:25:43.240791 containerd[1492]: time="2025-01-30T13:25:43.240496784Z" level=info msg="CreateContainer within sandbox \"32104d8731104cc708364926c4c7327acfe314cc050c787a8a83e4dc1a1671c1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26198c320db92f5e751d7872d6bed197237d22b850ee50b76e330e58392ac21a\""
Jan 30 13:25:43.244348 containerd[1492]: time="2025-01-30T13:25:43.243128403Z" level=info msg="StartContainer for \"26198c320db92f5e751d7872d6bed197237d22b850ee50b76e330e58392ac21a\""
Jan 30 13:25:43.247871 containerd[1492]: time="2025-01-30T13:25:43.247823517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zw8fc,Uid:250d3552-7dbe-45aa-926c-9aed6f589159,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\""
Jan 30 13:25:43.254450 containerd[1492]: time="2025-01-30T13:25:43.253545078Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 30 13:25:43.274664 containerd[1492]: time="2025-01-30T13:25:43.274416668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fldkb,Uid:4cef61b5-a560-4b0f-b7cb-c966ab39c322,Namespace:kube-system,Attempt:0,}"
Jan 30 13:25:43.280899 systemd[1]: Started cri-containerd-26198c320db92f5e751d7872d6bed197237d22b850ee50b76e330e58392ac21a.scope - libcontainer container 26198c320db92f5e751d7872d6bed197237d22b850ee50b76e330e58392ac21a.
Jan 30 13:25:43.324379 containerd[1492]: time="2025-01-30T13:25:43.324253187Z" level=info msg="StartContainer for \"26198c320db92f5e751d7872d6bed197237d22b850ee50b76e330e58392ac21a\" returns successfully"
Jan 30 13:25:43.334639 containerd[1492]: time="2025-01-30T13:25:43.334164338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:25:43.334639 containerd[1492]: time="2025-01-30T13:25:43.334343979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:25:43.334639 containerd[1492]: time="2025-01-30T13:25:43.334357139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:25:43.334639 containerd[1492]: time="2025-01-30T13:25:43.334490060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:25:43.357795 systemd[1]: Started cri-containerd-adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98.scope - libcontainer container adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98.
Jan 30 13:25:43.401369 containerd[1492]: time="2025-01-30T13:25:43.401146299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-fldkb,Uid:4cef61b5-a560-4b0f-b7cb-c966ab39c322,Namespace:kube-system,Attempt:0,} returns sandbox id \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\""
Jan 30 13:25:43.798677 kubelet[2830]: I0130 13:25:43.798582 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hmkfk" podStartSLOduration=1.7985557170000002 podStartE2EDuration="1.798555717s" podCreationTimestamp="2025-01-30 13:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:25:43.798218954 +0000 UTC m=+17.255325129" watchObservedRunningTime="2025-01-30 13:25:43.798555717 +0000 UTC m=+17.255661852"
Jan 30 13:25:48.362791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519783044.mount: Deactivated successfully.
Jan 30 13:25:49.689809 containerd[1492]: time="2025-01-30T13:25:49.689717155Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:25:49.692179 containerd[1492]: time="2025-01-30T13:25:49.692118011Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 30 13:25:49.693857 containerd[1492]: time="2025-01-30T13:25:49.693798583Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:25:49.696167 containerd[1492]: time="2025-01-30T13:25:49.696038637Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.441570432s"
Jan 30 13:25:49.696167 containerd[1492]: time="2025-01-30T13:25:49.696096478Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 30 13:25:49.698852 containerd[1492]: time="2025-01-30T13:25:49.698337853Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 30 13:25:49.701371 containerd[1492]: time="2025-01-30T13:25:49.701058511Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 30 13:25:49.723868 containerd[1492]: time="2025-01-30T13:25:49.723726262Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\""
Jan 30 13:25:49.723868 containerd[1492]: time="2025-01-30T13:25:49.724415907Z" level=info msg="StartContainer for \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\""
Jan 30 13:25:49.763655 systemd[1]: run-containerd-runc-k8s.io-0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0-runc.fgwM4r.mount: Deactivated successfully.
Jan 30 13:25:49.772229 systemd[1]: Started cri-containerd-0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0.scope - libcontainer container 0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0.
Jan 30 13:25:49.829144 containerd[1492]: time="2025-01-30T13:25:49.828189680Z" level=info msg="StartContainer for \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\" returns successfully"
Jan 30 13:25:49.849109 systemd[1]: cri-containerd-0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0.scope: Deactivated successfully.
Jan 30 13:25:50.088174 containerd[1492]: time="2025-01-30T13:25:50.088010369Z" level=info msg="shim disconnected" id=0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0 namespace=k8s.io
Jan 30 13:25:50.088174 containerd[1492]: time="2025-01-30T13:25:50.088079009Z" level=warning msg="cleaning up after shim disconnected" id=0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0 namespace=k8s.io
Jan 30 13:25:50.088174 containerd[1492]: time="2025-01-30T13:25:50.088091169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:25:50.712817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0-rootfs.mount: Deactivated successfully.
Jan 30 13:25:50.820970 containerd[1492]: time="2025-01-30T13:25:50.820906808Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 30 13:25:50.842613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020451718.mount: Deactivated successfully.
Jan 30 13:25:50.855986 containerd[1492]: time="2025-01-30T13:25:50.855811878Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\""
Jan 30 13:25:50.858240 containerd[1492]: time="2025-01-30T13:25:50.858196174Z" level=info msg="StartContainer for \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\""
Jan 30 13:25:50.898751 systemd[1]: Started cri-containerd-8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b.scope - libcontainer container 8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b.
Jan 30 13:25:50.938186 containerd[1492]: time="2025-01-30T13:25:50.938120662Z" level=info msg="StartContainer for \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\" returns successfully"
Jan 30 13:25:50.954701 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:25:50.954926 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:25:50.955009 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:25:50.963584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:25:50.963891 systemd[1]: cri-containerd-8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b.scope: Deactivated successfully.
Jan 30 13:25:50.992629 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:25:51.007429 containerd[1492]: time="2025-01-30T13:25:51.007347518Z" level=info msg="shim disconnected" id=8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b namespace=k8s.io
Jan 30 13:25:51.007429 containerd[1492]: time="2025-01-30T13:25:51.007418959Z" level=warning msg="cleaning up after shim disconnected" id=8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b namespace=k8s.io
Jan 30 13:25:51.007429 containerd[1492]: time="2025-01-30T13:25:51.007431399Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:25:51.021151 containerd[1492]: time="2025-01-30T13:25:51.021077608Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:25:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 30 13:25:51.711602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b-rootfs.mount: Deactivated successfully.
Jan 30 13:25:51.837682 containerd[1492]: time="2025-01-30T13:25:51.835016882Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 13:25:51.896256 containerd[1492]: time="2025-01-30T13:25:51.896108961Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\""
Jan 30 13:25:51.897758 containerd[1492]: time="2025-01-30T13:25:51.897526410Z" level=info msg="StartContainer for \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\""
Jan 30 13:25:51.947744 systemd[1]: Started cri-containerd-db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8.scope - libcontainer container db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8.
Jan 30 13:25:52.000639 systemd[1]: cri-containerd-db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8.scope: Deactivated successfully.
Jan 30 13:25:52.004548 containerd[1492]: time="2025-01-30T13:25:52.004238187Z" level=info msg="StartContainer for \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\" returns successfully"
Jan 30 13:25:52.066574 containerd[1492]: time="2025-01-30T13:25:52.066186587Z" level=info msg="shim disconnected" id=db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8 namespace=k8s.io
Jan 30 13:25:52.066574 containerd[1492]: time="2025-01-30T13:25:52.066316708Z" level=warning msg="cleaning up after shim disconnected" id=db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8 namespace=k8s.io
Jan 30 13:25:52.066574 containerd[1492]: time="2025-01-30T13:25:52.066327628Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:25:52.350124 containerd[1492]: time="2025-01-30T13:25:52.348906612Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:25:52.351708 containerd[1492]: time="2025-01-30T13:25:52.351644270Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 30 13:25:52.353175 containerd[1492]: time="2025-01-30T13:25:52.353125440Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:25:52.355282 containerd[1492]: time="2025-01-30T13:25:52.355222573Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.65681644s"
Jan 30 13:25:52.355454 containerd[1492]: time="2025-01-30T13:25:52.355436975Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 30 13:25:52.361887 containerd[1492]: time="2025-01-30T13:25:52.361832256Z" level=info msg="CreateContainer within sandbox \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 30 13:25:52.378324 containerd[1492]: time="2025-01-30T13:25:52.378227042Z" level=info msg="CreateContainer within sandbox \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\""
Jan 30 13:25:52.379668 containerd[1492]: time="2025-01-30T13:25:52.379141808Z" level=info msg="StartContainer for \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\""
Jan 30 13:25:52.419826 systemd[1]: Started cri-containerd-cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735.scope - libcontainer container cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735.
Jan 30 13:25:52.458874 containerd[1492]: time="2025-01-30T13:25:52.458820962Z" level=info msg="StartContainer for \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\" returns successfully"
Jan 30 13:25:52.715857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8-rootfs.mount: Deactivated successfully.
Jan 30 13:25:52.859576 containerd[1492]: time="2025-01-30T13:25:52.857881739Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 13:25:52.893650 containerd[1492]: time="2025-01-30T13:25:52.893589810Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\""
Jan 30 13:25:52.896745 containerd[1492]: time="2025-01-30T13:25:52.895752464Z" level=info msg="StartContainer for \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\""
Jan 30 13:25:52.902853 kubelet[2830]: I0130 13:25:52.902659 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-fldkb" podStartSLOduration=1.9489215579999999 podStartE2EDuration="10.902638268s" podCreationTimestamp="2025-01-30 13:25:42 +0000 UTC" firstStartedPulling="2025-01-30 13:25:43.403178474 +0000 UTC m=+16.860284609" lastFinishedPulling="2025-01-30 13:25:52.356895184 +0000 UTC m=+25.814001319" observedRunningTime="2025-01-30 13:25:52.854416197 +0000 UTC m=+26.311522332" watchObservedRunningTime="2025-01-30 13:25:52.902638268 +0000 UTC m=+26.359744403"
Jan 30 13:25:52.954874 systemd[1]: Started cri-containerd-55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14.scope - libcontainer container 55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14.
Jan 30 13:25:53.007072 systemd[1]: cri-containerd-55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14.scope: Deactivated successfully.
Jan 30 13:25:53.012797 containerd[1492]: time="2025-01-30T13:25:53.012748658Z" level=info msg="StartContainer for \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\" returns successfully"
Jan 30 13:25:53.093324 containerd[1492]: time="2025-01-30T13:25:53.093196292Z" level=info msg="shim disconnected" id=55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14 namespace=k8s.io
Jan 30 13:25:53.093822 containerd[1492]: time="2025-01-30T13:25:53.093345173Z" level=warning msg="cleaning up after shim disconnected" id=55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14 namespace=k8s.io
Jan 30 13:25:53.093822 containerd[1492]: time="2025-01-30T13:25:53.093358213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:25:53.712320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14-rootfs.mount: Deactivated successfully.
Jan 30 13:25:53.868920 containerd[1492]: time="2025-01-30T13:25:53.868858647Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 13:25:53.913326 containerd[1492]: time="2025-01-30T13:25:53.913042730Z" level=info msg="CreateContainer within sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\""
Jan 30 13:25:53.914765 containerd[1492]: time="2025-01-30T13:25:53.914655260Z" level=info msg="StartContainer for \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\""
Jan 30 13:25:53.950787 systemd[1]: Started cri-containerd-b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf.scope - libcontainer container b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf.
Jan 30 13:25:53.995169 containerd[1492]: time="2025-01-30T13:25:53.992121915Z" level=info msg="StartContainer for \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\" returns successfully"
Jan 30 13:25:54.113874 kubelet[2830]: I0130 13:25:54.113831 2830 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 13:25:54.149985 kubelet[2830]: I0130 13:25:54.149862 2830 topology_manager.go:215] "Topology Admit Handler" podUID="85ed8820-dcbe-4c11-bbfd-359aded16a92" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zxtjh"
Jan 30 13:25:54.152907 kubelet[2830]: I0130 13:25:54.152797 2830 topology_manager.go:215] "Topology Admit Handler" podUID="6e40a956-f7df-4fe7-912b-c3a7dad397ca" podNamespace="kube-system" podName="coredns-7db6d8ff4d-prdmx"
Jan 30 13:25:54.164237 systemd[1]: Created slice kubepods-burstable-pod6e40a956_f7df_4fe7_912b_c3a7dad397ca.slice - libcontainer container kubepods-burstable-pod6e40a956_f7df_4fe7_912b_c3a7dad397ca.slice.
Jan 30 13:25:54.181055 systemd[1]: Created slice kubepods-burstable-pod85ed8820_dcbe_4c11_bbfd_359aded16a92.slice - libcontainer container kubepods-burstable-pod85ed8820_dcbe_4c11_bbfd_359aded16a92.slice.
Jan 30 13:25:54.259902 kubelet[2830]: I0130 13:25:54.259774 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e40a956-f7df-4fe7-912b-c3a7dad397ca-config-volume\") pod \"coredns-7db6d8ff4d-prdmx\" (UID: \"6e40a956-f7df-4fe7-912b-c3a7dad397ca\") " pod="kube-system/coredns-7db6d8ff4d-prdmx"
Jan 30 13:25:54.260138 kubelet[2830]: I0130 13:25:54.260120 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xhq6\" (UniqueName: \"kubernetes.io/projected/6e40a956-f7df-4fe7-912b-c3a7dad397ca-kube-api-access-8xhq6\") pod \"coredns-7db6d8ff4d-prdmx\" (UID: \"6e40a956-f7df-4fe7-912b-c3a7dad397ca\") " pod="kube-system/coredns-7db6d8ff4d-prdmx"
Jan 30 13:25:54.260242 kubelet[2830]: I0130 13:25:54.260230 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85ed8820-dcbe-4c11-bbfd-359aded16a92-config-volume\") pod \"coredns-7db6d8ff4d-zxtjh\" (UID: \"85ed8820-dcbe-4c11-bbfd-359aded16a92\") " pod="kube-system/coredns-7db6d8ff4d-zxtjh"
Jan 30 13:25:54.260615 kubelet[2830]: I0130 13:25:54.260338 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9v9f\" (UniqueName: \"kubernetes.io/projected/85ed8820-dcbe-4c11-bbfd-359aded16a92-kube-api-access-j9v9f\") pod \"coredns-7db6d8ff4d-zxtjh\" (UID: \"85ed8820-dcbe-4c11-bbfd-359aded16a92\") " pod="kube-system/coredns-7db6d8ff4d-zxtjh"
Jan 30 13:25:54.481060 containerd[1492]: time="2025-01-30T13:25:54.480982086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-prdmx,Uid:6e40a956-f7df-4fe7-912b-c3a7dad397ca,Namespace:kube-system,Attempt:0,}"
Jan 30 13:25:54.489652 containerd[1492]: time="2025-01-30T13:25:54.489572020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zxtjh,Uid:85ed8820-dcbe-4c11-bbfd-359aded16a92,Namespace:kube-system,Attempt:0,}"
Jan 30 13:25:54.886146 kubelet[2830]: I0130 13:25:54.885825 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zw8fc" podStartSLOduration=6.439395019 podStartE2EDuration="12.885806005s" podCreationTimestamp="2025-01-30 13:25:42 +0000 UTC" firstStartedPulling="2025-01-30 13:25:43.251092101 +0000 UTC m=+16.708198196" lastFinishedPulling="2025-01-30 13:25:49.697503047 +0000 UTC m=+23.154609182" observedRunningTime="2025-01-30 13:25:54.885141721 +0000 UTC m=+28.342247856" watchObservedRunningTime="2025-01-30 13:25:54.885806005 +0000 UTC m=+28.342912180"
Jan 30 13:25:56.391449 systemd-networkd[1386]: cilium_host: Link UP
Jan 30 13:25:56.394450 systemd-networkd[1386]: cilium_net: Link UP
Jan 30 13:25:56.395254 systemd-networkd[1386]: cilium_net: Gained carrier
Jan 30 13:25:56.397166 systemd-networkd[1386]: cilium_host: Gained carrier
Jan 30 13:25:56.539587 systemd-networkd[1386]: cilium_vxlan: Link UP
Jan 30 13:25:56.539944 systemd-networkd[1386]: cilium_vxlan: Gained carrier
Jan 30 13:25:56.722671 systemd-networkd[1386]: cilium_net: Gained IPv6LL
Jan 30 13:25:56.895785 kernel: NET: Registered PF_ALG protocol family
Jan 30 13:25:57.283800 systemd-networkd[1386]: cilium_host: Gained IPv6LL
Jan 30 13:25:57.751720 systemd-networkd[1386]: lxc_health: Link UP
Jan 30 13:25:57.763736 systemd-networkd[1386]: lxc_health: Gained carrier
Jan 30 13:25:58.071821 systemd-networkd[1386]: lxc9224ed60b019: Link UP
Jan 30 13:25:58.078736 kernel: eth0: renamed from tmp0b636
Jan 30 13:25:58.085912 systemd-networkd[1386]: lxca57316d80c32: Link UP
Jan 30 13:25:58.088385 systemd-networkd[1386]: lxc9224ed60b019: Gained carrier
Jan 30 13:25:58.096498 kernel: eth0: renamed from tmp90f8d
Jan 30 13:25:58.108868 systemd-networkd[1386]: lxca57316d80c32: Gained carrier
Jan 30 13:25:58.306683 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL
Jan 30 13:25:59.266709 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Jan 30 13:25:59.524668 systemd-networkd[1386]: lxca57316d80c32: Gained IPv6LL
Jan 30 13:26:00.035704 systemd-networkd[1386]: lxc9224ed60b019: Gained IPv6LL
Jan 30 13:26:03.014402 containerd[1492]: time="2025-01-30T13:26:03.013557002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:26:03.016412 containerd[1492]: time="2025-01-30T13:26:03.016117257Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:26:03.016412 containerd[1492]: time="2025-01-30T13:26:03.016152657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:26:03.016412 containerd[1492]: time="2025-01-30T13:26:03.016305938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:26:03.053075 systemd[1]: Started cri-containerd-90f8d9da32b953f53b601424dbe4a4f53c2d9800438ee139b59467b8edd2da42.scope - libcontainer container 90f8d9da32b953f53b601424dbe4a4f53c2d9800438ee139b59467b8edd2da42.
Jan 30 13:26:03.066163 containerd[1492]: time="2025-01-30T13:26:03.065914146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:26:03.066163 containerd[1492]: time="2025-01-30T13:26:03.066001746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:26:03.066163 containerd[1492]: time="2025-01-30T13:26:03.066014386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:26:03.067050 containerd[1492]: time="2025-01-30T13:26:03.066247748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:26:03.104689 systemd[1]: run-containerd-runc-k8s.io-0b6364f683dc57a0a4cb53a50635cb15177051c35241085a4e7dbfdc1fcaf651-runc.VFVBKW.mount: Deactivated successfully.
Jan 30 13:26:03.118759 systemd[1]: Started cri-containerd-0b6364f683dc57a0a4cb53a50635cb15177051c35241085a4e7dbfdc1fcaf651.scope - libcontainer container 0b6364f683dc57a0a4cb53a50635cb15177051c35241085a4e7dbfdc1fcaf651.
Jan 30 13:26:03.136040 containerd[1492]: time="2025-01-30T13:26:03.135988473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zxtjh,Uid:85ed8820-dcbe-4c11-bbfd-359aded16a92,Namespace:kube-system,Attempt:0,} returns sandbox id \"90f8d9da32b953f53b601424dbe4a4f53c2d9800438ee139b59467b8edd2da42\""
Jan 30 13:26:03.148111 containerd[1492]: time="2025-01-30T13:26:03.147959382Z" level=info msg="CreateContainer within sandbox \"90f8d9da32b953f53b601424dbe4a4f53c2d9800438ee139b59467b8edd2da42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:26:03.173963 containerd[1492]: time="2025-01-30T13:26:03.173902693Z" level=info msg="CreateContainer within sandbox \"90f8d9da32b953f53b601424dbe4a4f53c2d9800438ee139b59467b8edd2da42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27da84ad5feb1e262b7a2a4dca9fa844ea2955ca0162fd00fafd0156af282189\""
Jan 30 13:26:03.177354 containerd[1492]: time="2025-01-30T13:26:03.175048219Z" level=info msg="StartContainer for \"27da84ad5feb1e262b7a2a4dca9fa844ea2955ca0162fd00fafd0156af282189\""
Jan 30 13:26:03.207353 containerd[1492]: time="2025-01-30T13:26:03.207049765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-prdmx,Uid:6e40a956-f7df-4fe7-912b-c3a7dad397ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b6364f683dc57a0a4cb53a50635cb15177051c35241085a4e7dbfdc1fcaf651\""
Jan 30 13:26:03.219734 containerd[1492]: time="2025-01-30T13:26:03.219241636Z" level=info msg="CreateContainer within sandbox \"0b6364f683dc57a0a4cb53a50635cb15177051c35241085a4e7dbfdc1fcaf651\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:26:03.231682 systemd[1]: Started cri-containerd-27da84ad5feb1e262b7a2a4dca9fa844ea2955ca0162fd00fafd0156af282189.scope - libcontainer container 27da84ad5feb1e262b7a2a4dca9fa844ea2955ca0162fd00fafd0156af282189.
Jan 30 13:26:03.260382 containerd[1492]: time="2025-01-30T13:26:03.260333194Z" level=info msg="CreateContainer within sandbox \"0b6364f683dc57a0a4cb53a50635cb15177051c35241085a4e7dbfdc1fcaf651\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"293c5843862d13aaa0ff8b532a34b6dacc73ccc4e64a922762819d29f42564f5\""
Jan 30 13:26:03.262399 containerd[1492]: time="2025-01-30T13:26:03.262091725Z" level=info msg="StartContainer for \"293c5843862d13aaa0ff8b532a34b6dacc73ccc4e64a922762819d29f42564f5\""
Jan 30 13:26:03.286755 containerd[1492]: time="2025-01-30T13:26:03.285776942Z" level=info msg="StartContainer for \"27da84ad5feb1e262b7a2a4dca9fa844ea2955ca0162fd00fafd0156af282189\" returns successfully"
Jan 30 13:26:03.317712 systemd[1]: Started cri-containerd-293c5843862d13aaa0ff8b532a34b6dacc73ccc4e64a922762819d29f42564f5.scope - libcontainer container 293c5843862d13aaa0ff8b532a34b6dacc73ccc4e64a922762819d29f42564f5.
Jan 30 13:26:03.380114 containerd[1492]: time="2025-01-30T13:26:03.378320719Z" level=info msg="StartContainer for \"293c5843862d13aaa0ff8b532a34b6dacc73ccc4e64a922762819d29f42564f5\" returns successfully"
Jan 30 13:26:03.965636 kubelet[2830]: I0130 13:26:03.965100 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zxtjh" podStartSLOduration=21.965077045 podStartE2EDuration="21.965077045s" podCreationTimestamp="2025-01-30 13:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:26:03.931123688 +0000 UTC m=+37.388229823" watchObservedRunningTime="2025-01-30 13:26:03.965077045 +0000 UTC m=+37.422183180"
Jan 30 13:26:03.965636 kubelet[2830]: I0130 13:26:03.965227 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-prdmx" podStartSLOduration=21.965223126 podStartE2EDuration="21.965223126s" podCreationTimestamp="2025-01-30 13:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:26:03.964775604 +0000 UTC m=+37.421881739" watchObservedRunningTime="2025-01-30 13:26:03.965223126 +0000 UTC m=+37.422329261"
Jan 30 13:26:05.813724 update_engine[1467]: I20250130 13:26:05.813107 1467 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Jan 30 13:26:05.813724 update_engine[1467]: I20250130 13:26:05.813176 1467 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Jan 30 13:26:05.813724 update_engine[1467]: I20250130 13:26:05.813526 1467 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Jan 30 13:26:05.814630 update_engine[1467]: I20250130 13:26:05.813930 1467 omaha_request_params.cc:62] Current group set to beta
Jan 30 13:26:05.814630 update_engine[1467]: I20250130 13:26:05.814056 1467 update_attempter.cc:499] Already updated boot flags. Skipping.
Jan 30 13:26:05.814630 update_engine[1467]: I20250130 13:26:05.814076 1467 update_attempter.cc:643] Scheduling an action processor start.
Jan 30 13:26:05.814630 update_engine[1467]: I20250130 13:26:05.814096 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Jan 30 13:26:05.814630 update_engine[1467]: I20250130 13:26:05.814147 1467 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Jan 30 13:26:05.814630 update_engine[1467]: I20250130 13:26:05.814218 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled
Jan 30 13:26:05.814630 update_engine[1467]: I20250130 13:26:05.814232 1467 omaha_request_action.cc:272] Request:
Jan 30 13:26:05.814630 update_engine[1467]:
Jan 30 13:26:05.814630 update_engine[1467]:
Jan 30 13:26:05.814630 update_engine[1467]:
Jan 30 13:26:05.814630 update_engine[1467]:
Jan 30 13:26:05.814630 update_engine[1467]:
Jan 30 13:26:05.814630 update_engine[1467]:
Jan 30 13:26:05.814630 update_engine[1467]:
Jan 30 13:26:05.814630 update_engine[1467]:
Jan 30 13:26:05.814630 update_engine[1467]: I20250130 13:26:05.814240 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Jan 30 13:26:05.816449 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Jan 30 13:26:05.817222 update_engine[1467]: I20250130 13:26:05.817148 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Jan 30 13:26:05.817743 update_engine[1467]: I20250130 13:26:05.817703 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Jan 30 13:26:05.818540 update_engine[1467]: E20250130 13:26:05.818499 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:26:05.818626 update_engine[1467]: I20250130 13:26:05.818588 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 13:26:15.821730 update_engine[1467]: I20250130 13:26:15.821583 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:26:15.822746 update_engine[1467]: I20250130 13:26:15.821873 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:26:15.822746 update_engine[1467]: I20250130 13:26:15.822147 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:26:15.822746 update_engine[1467]: E20250130 13:26:15.822726 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:26:15.822847 update_engine[1467]: I20250130 13:26:15.822791 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 13:26:25.822080 update_engine[1467]: I20250130 13:26:25.821882 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:26:25.822781 update_engine[1467]: I20250130 13:26:25.822288 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:26:25.822781 update_engine[1467]: I20250130 13:26:25.822675 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 13:26:25.823091 update_engine[1467]: E20250130 13:26:25.823037 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:26:25.823148 update_engine[1467]: I20250130 13:26:25.823099 1467 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 13:26:35.821674 update_engine[1467]: I20250130 13:26:35.821558 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:26:35.822175 update_engine[1467]: I20250130 13:26:35.822022 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:26:35.822429 update_engine[1467]: I20250130 13:26:35.822358 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:26:35.822865 update_engine[1467]: E20250130 13:26:35.822809 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:26:35.823283 update_engine[1467]: I20250130 13:26:35.822887 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 13:26:35.823360 update_engine[1467]: I20250130 13:26:35.823283 1467 omaha_request_action.cc:617] Omaha request response: Jan 30 13:26:35.823488 update_engine[1467]: E20250130 13:26:35.823418 1467 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 13:26:35.823488 update_engine[1467]: I20250130 13:26:35.823462 1467 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 30 13:26:35.823573 update_engine[1467]: I20250130 13:26:35.823496 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:26:35.823573 update_engine[1467]: I20250130 13:26:35.823505 1467 update_attempter.cc:306] Processing Done. Jan 30 13:26:35.823573 update_engine[1467]: E20250130 13:26:35.823524 1467 update_attempter.cc:619] Update failed. 
Jan 30 13:26:35.823573 update_engine[1467]: I20250130 13:26:35.823532 1467 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 13:26:35.823573 update_engine[1467]: I20250130 13:26:35.823540 1467 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 13:26:35.823573 update_engine[1467]: I20250130 13:26:35.823549 1467 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 13:26:35.823765 update_engine[1467]: I20250130 13:26:35.823652 1467 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 13:26:35.823765 update_engine[1467]: I20250130 13:26:35.823686 1467 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 13:26:35.823765 update_engine[1467]: I20250130 13:26:35.823695 1467 omaha_request_action.cc:272] Request: Jan 30 13:26:35.823765 update_engine[1467]: Jan 30 13:26:35.823765 update_engine[1467]: Jan 30 13:26:35.823765 update_engine[1467]: Jan 30 13:26:35.823765 update_engine[1467]: Jan 30 13:26:35.823765 update_engine[1467]: Jan 30 13:26:35.823765 update_engine[1467]: Jan 30 13:26:35.823765 update_engine[1467]: I20250130 13:26:35.823704 1467 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:26:35.823981 update_engine[1467]: I20250130 13:26:35.823918 1467 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:26:35.824269 update_engine[1467]: I20250130 13:26:35.824195 1467 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 13:26:35.824622 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 13:26:35.825244 locksmithd[1501]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 13:26:35.825301 update_engine[1467]: E20250130 13:26:35.824687 1467 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:26:35.825301 update_engine[1467]: I20250130 13:26:35.824752 1467 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 13:26:35.825301 update_engine[1467]: I20250130 13:26:35.824762 1467 omaha_request_action.cc:617] Omaha request response: Jan 30 13:26:35.825301 update_engine[1467]: I20250130 13:26:35.824774 1467 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:26:35.825301 update_engine[1467]: I20250130 13:26:35.824780 1467 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:26:35.825301 update_engine[1467]: I20250130 13:26:35.824788 1467 update_attempter.cc:306] Processing Done. Jan 30 13:26:35.825301 update_engine[1467]: I20250130 13:26:35.824798 1467 update_attempter.cc:310] Error event sent. Jan 30 13:26:35.825301 update_engine[1467]: I20250130 13:26:35.824812 1467 update_check_scheduler.cc:74] Next update check in 47m22s Jan 30 13:26:50.584488 kernel: hrtimer: interrupt took 4144929 ns Jan 30 13:29:02.834980 systemd[1]: Started sshd@7-78.46.226.143:22-35.219.59.197:40288.service - OpenSSH per-connection server daemon (35.219.59.197:40288). 
Jan 30 13:29:04.070895 sshd[4238]: Invalid user es from 35.219.59.197 port 40288 Jan 30 13:29:04.316430 sshd[4238]: Received disconnect from 35.219.59.197 port 40288:11: Bye Bye [preauth] Jan 30 13:29:04.316430 sshd[4238]: Disconnected from invalid user es 35.219.59.197 port 40288 [preauth] Jan 30 13:29:04.321281 systemd[1]: sshd@7-78.46.226.143:22-35.219.59.197:40288.service: Deactivated successfully. Jan 30 13:30:16.091339 systemd[1]: Started sshd@8-78.46.226.143:22-139.178.68.195:45662.service - OpenSSH per-connection server daemon (139.178.68.195:45662). Jan 30 13:30:17.099246 sshd[4253]: Accepted publickey for core from 139.178.68.195 port 45662 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:17.101287 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:17.109218 systemd-logind[1463]: New session 8 of user core. Jan 30 13:30:17.114758 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:30:17.892578 sshd[4255]: Connection closed by 139.178.68.195 port 45662 Jan 30 13:30:17.893399 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:17.898281 systemd[1]: sshd@8-78.46.226.143:22-139.178.68.195:45662.service: Deactivated successfully. Jan 30 13:30:17.900318 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:30:17.903337 systemd-logind[1463]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:30:17.905037 systemd-logind[1463]: Removed session 8. Jan 30 13:30:23.064689 systemd[1]: Started sshd@9-78.46.226.143:22-139.178.68.195:45670.service - OpenSSH per-connection server daemon (139.178.68.195:45670). 
Jan 30 13:30:24.092060 sshd[4269]: Accepted publickey for core from 139.178.68.195 port 45670 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:24.096202 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:24.102345 systemd-logind[1463]: New session 9 of user core. Jan 30 13:30:24.112821 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:30:24.875301 sshd[4271]: Connection closed by 139.178.68.195 port 45670 Jan 30 13:30:24.878015 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:24.883663 systemd[1]: sshd@9-78.46.226.143:22-139.178.68.195:45670.service: Deactivated successfully. Jan 30 13:30:24.886758 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:30:24.891098 systemd-logind[1463]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:30:24.894558 systemd-logind[1463]: Removed session 9. Jan 30 13:30:30.060063 systemd[1]: Started sshd@10-78.46.226.143:22-139.178.68.195:50232.service - OpenSSH per-connection server daemon (139.178.68.195:50232). Jan 30 13:30:31.077138 sshd[4285]: Accepted publickey for core from 139.178.68.195 port 50232 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:31.081697 sshd-session[4285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:31.089167 systemd-logind[1463]: New session 10 of user core. Jan 30 13:30:31.099768 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:30:31.750046 systemd[1]: Started sshd@11-78.46.226.143:22-103.234.151.55:35578.service - OpenSSH per-connection server daemon (103.234.151.55:35578). 
Jan 30 13:30:31.854921 sshd[4287]: Connection closed by 139.178.68.195 port 50232 Jan 30 13:30:31.855932 sshd-session[4285]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:31.862019 systemd[1]: sshd@10-78.46.226.143:22-139.178.68.195:50232.service: Deactivated successfully. Jan 30 13:30:31.866486 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:30:31.868907 systemd-logind[1463]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:30:31.871120 systemd-logind[1463]: Removed session 10. Jan 30 13:30:32.048273 systemd[1]: Started sshd@12-78.46.226.143:22-139.178.68.195:50240.service - OpenSSH per-connection server daemon (139.178.68.195:50240). Jan 30 13:30:32.962609 sshd[4297]: Invalid user deploy from 103.234.151.55 port 35578 Jan 30 13:30:33.036131 sshd[4302]: Accepted publickey for core from 139.178.68.195 port 50240 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:33.038105 sshd-session[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:33.044854 systemd-logind[1463]: New session 11 of user core. Jan 30 13:30:33.048749 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:30:33.192229 sshd[4297]: Received disconnect from 103.234.151.55 port 35578:11: Bye Bye [preauth] Jan 30 13:30:33.192229 sshd[4297]: Disconnected from invalid user deploy 103.234.151.55 port 35578 [preauth] Jan 30 13:30:33.195076 systemd[1]: sshd@11-78.46.226.143:22-103.234.151.55:35578.service: Deactivated successfully. Jan 30 13:30:33.859655 sshd[4304]: Connection closed by 139.178.68.195 port 50240 Jan 30 13:30:33.859732 sshd-session[4302]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:33.865505 systemd-logind[1463]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:30:33.865757 systemd[1]: sshd@12-78.46.226.143:22-139.178.68.195:50240.service: Deactivated successfully. 
Jan 30 13:30:33.869012 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:30:33.870498 systemd-logind[1463]: Removed session 11. Jan 30 13:30:34.034921 systemd[1]: Started sshd@13-78.46.226.143:22-139.178.68.195:50254.service - OpenSSH per-connection server daemon (139.178.68.195:50254). Jan 30 13:30:35.017643 sshd[4315]: Accepted publickey for core from 139.178.68.195 port 50254 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:35.018803 sshd-session[4315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:35.032017 systemd-logind[1463]: New session 12 of user core. Jan 30 13:30:35.038900 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:30:35.778552 sshd[4317]: Connection closed by 139.178.68.195 port 50254 Jan 30 13:30:35.779851 sshd-session[4315]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:35.786440 systemd-logind[1463]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:30:35.786511 systemd[1]: sshd@13-78.46.226.143:22-139.178.68.195:50254.service: Deactivated successfully. Jan 30 13:30:35.788655 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:30:35.791306 systemd-logind[1463]: Removed session 12. Jan 30 13:30:40.965784 systemd[1]: Started sshd@14-78.46.226.143:22-139.178.68.195:53592.service - OpenSSH per-connection server daemon (139.178.68.195:53592). Jan 30 13:30:41.966663 sshd[4328]: Accepted publickey for core from 139.178.68.195 port 53592 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:41.968835 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:41.974769 systemd-logind[1463]: New session 13 of user core. Jan 30 13:30:41.977753 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 30 13:30:42.734813 sshd[4330]: Connection closed by 139.178.68.195 port 53592 Jan 30 13:30:42.735565 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:42.739405 systemd[1]: sshd@14-78.46.226.143:22-139.178.68.195:53592.service: Deactivated successfully. Jan 30 13:30:42.743307 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:30:42.744951 systemd-logind[1463]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:30:42.746492 systemd-logind[1463]: Removed session 13. Jan 30 13:30:42.913804 systemd[1]: Started sshd@15-78.46.226.143:22-139.178.68.195:53600.service - OpenSSH per-connection server daemon (139.178.68.195:53600). Jan 30 13:30:43.919790 sshd[4340]: Accepted publickey for core from 139.178.68.195 port 53600 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:43.922036 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:43.927432 systemd-logind[1463]: New session 14 of user core. Jan 30 13:30:43.935829 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:30:44.744655 sshd[4344]: Connection closed by 139.178.68.195 port 53600 Jan 30 13:30:44.745354 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:44.754613 systemd[1]: sshd@15-78.46.226.143:22-139.178.68.195:53600.service: Deactivated successfully. Jan 30 13:30:44.758211 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:30:44.760029 systemd-logind[1463]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:30:44.761749 systemd-logind[1463]: Removed session 14. Jan 30 13:30:44.913522 systemd[1]: Started sshd@16-78.46.226.143:22-139.178.68.195:53606.service - OpenSSH per-connection server daemon (139.178.68.195:53606). 
Jan 30 13:30:45.913617 sshd[4352]: Accepted publickey for core from 139.178.68.195 port 53606 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:45.916331 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:45.926767 systemd-logind[1463]: New session 15 of user core. Jan 30 13:30:45.929716 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:30:48.341113 sshd[4354]: Connection closed by 139.178.68.195 port 53606 Jan 30 13:30:48.342802 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:48.349158 systemd[1]: sshd@16-78.46.226.143:22-139.178.68.195:53606.service: Deactivated successfully. Jan 30 13:30:48.353980 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:30:48.355324 systemd-logind[1463]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:30:48.357001 systemd-logind[1463]: Removed session 15. Jan 30 13:30:48.511990 systemd[1]: Started sshd@17-78.46.226.143:22-139.178.68.195:48216.service - OpenSSH per-connection server daemon (139.178.68.195:48216). Jan 30 13:30:49.496518 sshd[4369]: Accepted publickey for core from 139.178.68.195 port 48216 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:49.498799 sshd-session[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:49.508451 systemd-logind[1463]: New session 16 of user core. Jan 30 13:30:49.521812 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:30:50.388924 sshd[4371]: Connection closed by 139.178.68.195 port 48216 Jan 30 13:30:50.390439 sshd-session[4369]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:50.396645 systemd[1]: sshd@17-78.46.226.143:22-139.178.68.195:48216.service: Deactivated successfully. Jan 30 13:30:50.398851 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 30 13:30:50.400648 systemd-logind[1463]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:30:50.402054 systemd-logind[1463]: Removed session 16. Jan 30 13:30:50.564892 systemd[1]: Started sshd@18-78.46.226.143:22-139.178.68.195:48226.service - OpenSSH per-connection server daemon (139.178.68.195:48226). Jan 30 13:30:51.573194 sshd[4380]: Accepted publickey for core from 139.178.68.195 port 48226 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:51.576722 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:51.582834 systemd-logind[1463]: New session 17 of user core. Jan 30 13:30:51.587807 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:30:52.341462 sshd[4382]: Connection closed by 139.178.68.195 port 48226 Jan 30 13:30:52.342144 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:52.345977 systemd-logind[1463]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:30:52.346335 systemd[1]: sshd@18-78.46.226.143:22-139.178.68.195:48226.service: Deactivated successfully. Jan 30 13:30:52.349825 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:30:52.352514 systemd-logind[1463]: Removed session 17. Jan 30 13:30:57.530782 systemd[1]: Started sshd@19-78.46.226.143:22-139.178.68.195:46122.service - OpenSSH per-connection server daemon (139.178.68.195:46122). Jan 30 13:30:58.518531 sshd[4395]: Accepted publickey for core from 139.178.68.195 port 46122 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:30:58.521141 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:30:58.533558 systemd-logind[1463]: New session 18 of user core. Jan 30 13:30:58.543197 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 13:30:59.276358 sshd[4397]: Connection closed by 139.178.68.195 port 46122 Jan 30 13:30:59.277830 sshd-session[4395]: pam_unix(sshd:session): session closed for user core Jan 30 13:30:59.285380 systemd[1]: sshd@19-78.46.226.143:22-139.178.68.195:46122.service: Deactivated successfully. Jan 30 13:30:59.288796 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:30:59.291877 systemd-logind[1463]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:30:59.294266 systemd-logind[1463]: Removed session 18. Jan 30 13:31:04.461761 systemd[1]: Started sshd@20-78.46.226.143:22-139.178.68.195:46134.service - OpenSSH per-connection server daemon (139.178.68.195:46134). Jan 30 13:31:05.471018 sshd[4407]: Accepted publickey for core from 139.178.68.195 port 46134 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:05.473260 sshd-session[4407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:05.480958 systemd-logind[1463]: New session 19 of user core. Jan 30 13:31:05.490861 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:31:06.259597 sshd[4409]: Connection closed by 139.178.68.195 port 46134 Jan 30 13:31:06.260398 sshd-session[4407]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:06.267366 systemd[1]: sshd@20-78.46.226.143:22-139.178.68.195:46134.service: Deactivated successfully. Jan 30 13:31:06.270287 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:31:06.275743 systemd-logind[1463]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:31:06.277760 systemd-logind[1463]: Removed session 19. Jan 30 13:31:06.440951 systemd[1]: Started sshd@21-78.46.226.143:22-139.178.68.195:57264.service - OpenSSH per-connection server daemon (139.178.68.195:57264). 
Jan 30 13:31:07.431195 sshd[4420]: Accepted publickey for core from 139.178.68.195 port 57264 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:07.434300 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:07.443455 systemd-logind[1463]: New session 20 of user core. Jan 30 13:31:07.452260 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:31:10.158161 systemd[1]: run-containerd-runc-k8s.io-b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf-runc.58MOeb.mount: Deactivated successfully. Jan 30 13:31:10.164069 containerd[1492]: time="2025-01-30T13:31:10.164000268Z" level=info msg="StopContainer for \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\" with timeout 30 (s)" Jan 30 13:31:10.166263 containerd[1492]: time="2025-01-30T13:31:10.165328850Z" level=info msg="Stop container \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\" with signal terminated" Jan 30 13:31:10.186364 containerd[1492]: time="2025-01-30T13:31:10.186281992Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:31:10.199362 containerd[1492]: time="2025-01-30T13:31:10.199288564Z" level=info msg="StopContainer for \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\" with timeout 2 (s)" Jan 30 13:31:10.201386 containerd[1492]: time="2025-01-30T13:31:10.200834149Z" level=info msg="Stop container \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\" with signal terminated" Jan 30 13:31:10.204218 systemd[1]: cri-containerd-cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735.scope: Deactivated successfully. 
Jan 30 13:31:10.217846 systemd-networkd[1386]: lxc_health: Link DOWN Jan 30 13:31:10.217854 systemd-networkd[1386]: lxc_health: Lost carrier Jan 30 13:31:10.260080 systemd[1]: cri-containerd-b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf.scope: Deactivated successfully. Jan 30 13:31:10.260590 systemd[1]: cri-containerd-b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf.scope: Consumed 9.204s CPU time. Jan 30 13:31:10.265388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735-rootfs.mount: Deactivated successfully. Jan 30 13:31:10.276112 containerd[1492]: time="2025-01-30T13:31:10.275775012Z" level=info msg="shim disconnected" id=cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735 namespace=k8s.io Jan 30 13:31:10.276112 containerd[1492]: time="2025-01-30T13:31:10.275841133Z" level=warning msg="cleaning up after shim disconnected" id=cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735 namespace=k8s.io Jan 30 13:31:10.276112 containerd[1492]: time="2025-01-30T13:31:10.275850333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:10.293257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf-rootfs.mount: Deactivated successfully. 
Jan 30 13:31:10.300739 containerd[1492]: time="2025-01-30T13:31:10.300220691Z" level=info msg="shim disconnected" id=b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf namespace=k8s.io Jan 30 13:31:10.300739 containerd[1492]: time="2025-01-30T13:31:10.300596577Z" level=warning msg="cleaning up after shim disconnected" id=b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf namespace=k8s.io Jan 30 13:31:10.300739 containerd[1492]: time="2025-01-30T13:31:10.300620737Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:10.304631 containerd[1492]: time="2025-01-30T13:31:10.304558282Z" level=info msg="StopContainer for \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\" returns successfully" Jan 30 13:31:10.305685 containerd[1492]: time="2025-01-30T13:31:10.305360215Z" level=info msg="StopPodSandbox for \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\"" Jan 30 13:31:10.305685 containerd[1492]: time="2025-01-30T13:31:10.305412175Z" level=info msg="Container to stop \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:10.308037 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98-shm.mount: Deactivated successfully. Jan 30 13:31:10.321425 systemd[1]: cri-containerd-adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98.scope: Deactivated successfully. 
Jan 30 13:31:10.333273 containerd[1492]: time="2025-01-30T13:31:10.333212589Z" level=info msg="StopContainer for \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\" returns successfully" Jan 30 13:31:10.334353 containerd[1492]: time="2025-01-30T13:31:10.333865200Z" level=info msg="StopPodSandbox for \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\"" Jan 30 13:31:10.334353 containerd[1492]: time="2025-01-30T13:31:10.333917281Z" level=info msg="Container to stop \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:10.334353 containerd[1492]: time="2025-01-30T13:31:10.333929601Z" level=info msg="Container to stop \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:10.334353 containerd[1492]: time="2025-01-30T13:31:10.333938561Z" level=info msg="Container to stop \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:10.334353 containerd[1492]: time="2025-01-30T13:31:10.333948081Z" level=info msg="Container to stop \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:10.334353 containerd[1492]: time="2025-01-30T13:31:10.333957881Z" level=info msg="Container to stop \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:31:10.346353 systemd[1]: cri-containerd-3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72.scope: Deactivated successfully. 
Jan 30 13:31:10.372819 containerd[1492]: time="2025-01-30T13:31:10.372333107Z" level=info msg="shim disconnected" id=adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98 namespace=k8s.io Jan 30 13:31:10.372819 containerd[1492]: time="2025-01-30T13:31:10.372616032Z" level=warning msg="cleaning up after shim disconnected" id=adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98 namespace=k8s.io Jan 30 13:31:10.372819 containerd[1492]: time="2025-01-30T13:31:10.372624992Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:10.393001 containerd[1492]: time="2025-01-30T13:31:10.392640079Z" level=info msg="shim disconnected" id=3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72 namespace=k8s.io Jan 30 13:31:10.393001 containerd[1492]: time="2025-01-30T13:31:10.392787881Z" level=warning msg="cleaning up after shim disconnected" id=3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72 namespace=k8s.io Jan 30 13:31:10.393001 containerd[1492]: time="2025-01-30T13:31:10.392799921Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:10.393727 containerd[1492]: time="2025-01-30T13:31:10.393463972Z" level=info msg="TearDown network for sandbox \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\" successfully" Jan 30 13:31:10.393727 containerd[1492]: time="2025-01-30T13:31:10.393523293Z" level=info msg="StopPodSandbox for \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\" returns successfully" Jan 30 13:31:10.422199 containerd[1492]: time="2025-01-30T13:31:10.420911140Z" level=info msg="TearDown network for sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" successfully" Jan 30 13:31:10.422199 containerd[1492]: time="2025-01-30T13:31:10.420956181Z" level=info msg="StopPodSandbox for \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" returns successfully" Jan 30 13:31:10.482253 kubelet[2830]: I0130 13:31:10.480849 2830 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250d3552-7dbe-45aa-926c-9aed6f589159-hubble-tls\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.482253 kubelet[2830]: I0130 13:31:10.480937 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250d3552-7dbe-45aa-926c-9aed6f589159-clustermesh-secrets\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.482253 kubelet[2830]: I0130 13:31:10.480974 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-host-proc-sys-net\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.482253 kubelet[2830]: I0130 13:31:10.481006 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-lib-modules\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.482253 kubelet[2830]: I0130 13:31:10.481044 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-host-proc-sys-kernel\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.482253 kubelet[2830]: I0130 13:31:10.481075 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-run\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: 
\"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483057 kubelet[2830]: I0130 13:31:10.481116 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-config-path\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483057 kubelet[2830]: I0130 13:31:10.481147 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-hostproc\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483057 kubelet[2830]: I0130 13:31:10.481219 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cef61b5-a560-4b0f-b7cb-c966ab39c322-cilium-config-path\") pod \"4cef61b5-a560-4b0f-b7cb-c966ab39c322\" (UID: \"4cef61b5-a560-4b0f-b7cb-c966ab39c322\") " Jan 30 13:31:10.483057 kubelet[2830]: I0130 13:31:10.481261 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-cgroup\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483057 kubelet[2830]: I0130 13:31:10.481314 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-etc-cni-netd\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483057 kubelet[2830]: I0130 13:31:10.481357 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t2d6\" (UniqueName: 
\"kubernetes.io/projected/250d3552-7dbe-45aa-926c-9aed6f589159-kube-api-access-4t2d6\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483237 kubelet[2830]: I0130 13:31:10.481387 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-xtables-lock\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483237 kubelet[2830]: I0130 13:31:10.481456 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjkm9\" (UniqueName: \"kubernetes.io/projected/4cef61b5-a560-4b0f-b7cb-c966ab39c322-kube-api-access-kjkm9\") pod \"4cef61b5-a560-4b0f-b7cb-c966ab39c322\" (UID: \"4cef61b5-a560-4b0f-b7cb-c966ab39c322\") " Jan 30 13:31:10.483237 kubelet[2830]: I0130 13:31:10.481536 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-bpf-maps\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483237 kubelet[2830]: I0130 13:31:10.481566 2830 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cni-path\") pod \"250d3552-7dbe-45aa-926c-9aed6f589159\" (UID: \"250d3552-7dbe-45aa-926c-9aed6f589159\") " Jan 30 13:31:10.483237 kubelet[2830]: I0130 13:31:10.481723 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cni-path" (OuterVolumeSpecName: "cni-path") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.483237 kubelet[2830]: I0130 13:31:10.482615 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-hostproc" (OuterVolumeSpecName: "hostproc") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.483948 kubelet[2830]: I0130 13:31:10.483876 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.484095 kubelet[2830]: I0130 13:31:10.483963 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.484141 kubelet[2830]: I0130 13:31:10.484104 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.487510 kubelet[2830]: I0130 13:31:10.485741 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.487510 kubelet[2830]: I0130 13:31:10.487117 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.487510 kubelet[2830]: I0130 13:31:10.487174 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.491175 kubelet[2830]: I0130 13:31:10.488868 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.491386 kubelet[2830]: I0130 13:31:10.489900 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:31:10.491442 kubelet[2830]: I0130 13:31:10.490437 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:31:10.492155 kubelet[2830]: I0130 13:31:10.492118 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cef61b5-a560-4b0f-b7cb-c966ab39c322-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4cef61b5-a560-4b0f-b7cb-c966ab39c322" (UID: "4cef61b5-a560-4b0f-b7cb-c966ab39c322"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:31:10.492433 kubelet[2830]: I0130 13:31:10.492411 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/250d3552-7dbe-45aa-926c-9aed6f589159-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:31:10.493156 kubelet[2830]: I0130 13:31:10.493130 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250d3552-7dbe-45aa-926c-9aed6f589159-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:31:10.495783 kubelet[2830]: I0130 13:31:10.495715 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/250d3552-7dbe-45aa-926c-9aed6f589159-kube-api-access-4t2d6" (OuterVolumeSpecName: "kube-api-access-4t2d6") pod "250d3552-7dbe-45aa-926c-9aed6f589159" (UID: "250d3552-7dbe-45aa-926c-9aed6f589159"). InnerVolumeSpecName "kube-api-access-4t2d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:31:10.496693 kubelet[2830]: I0130 13:31:10.496647 2830 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cef61b5-a560-4b0f-b7cb-c966ab39c322-kube-api-access-kjkm9" (OuterVolumeSpecName: "kube-api-access-kjkm9") pod "4cef61b5-a560-4b0f-b7cb-c966ab39c322" (UID: "4cef61b5-a560-4b0f-b7cb-c966ab39c322"). InnerVolumeSpecName "kube-api-access-kjkm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:31:10.582565 kubelet[2830]: I0130 13:31:10.582523 2830 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-hostproc\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.582790 kubelet[2830]: I0130 13:31:10.582771 2830 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-run\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.582891 kubelet[2830]: I0130 13:31:10.582879 2830 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-config-path\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.582970 kubelet[2830]: I0130 13:31:10.582958 2830 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4t2d6\" (UniqueName: \"kubernetes.io/projected/250d3552-7dbe-45aa-926c-9aed6f589159-kube-api-access-4t2d6\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583045 kubelet[2830]: I0130 13:31:10.583035 2830 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cef61b5-a560-4b0f-b7cb-c966ab39c322-cilium-config-path\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583127 kubelet[2830]: I0130 13:31:10.583115 2830 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cilium-cgroup\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583195 kubelet[2830]: I0130 13:31:10.583185 2830 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-etc-cni-netd\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583263 kubelet[2830]: I0130 13:31:10.583246 2830 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kjkm9\" (UniqueName: \"kubernetes.io/projected/4cef61b5-a560-4b0f-b7cb-c966ab39c322-kube-api-access-kjkm9\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583320 kubelet[2830]: I0130 13:31:10.583311 2830 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-xtables-lock\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583382 kubelet[2830]: I0130 13:31:10.583365 2830 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-bpf-maps\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583433 kubelet[2830]: I0130 13:31:10.583425 2830 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-cni-path\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583526 kubelet[2830]: I0130 13:31:10.583514 2830 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/250d3552-7dbe-45aa-926c-9aed6f589159-hubble-tls\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583616 kubelet[2830]: I0130 13:31:10.583606 2830 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/250d3552-7dbe-45aa-926c-9aed6f589159-clustermesh-secrets\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583681 kubelet[2830]: I0130 13:31:10.583662 2830 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-host-proc-sys-net\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583734 kubelet[2830]: I0130 13:31:10.583724 2830 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-lib-modules\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.583796 kubelet[2830]: I0130 13:31:10.583787 2830 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/250d3552-7dbe-45aa-926c-9aed6f589159-host-proc-sys-kernel\") on node \"ci-4186-1-0-1-a1b218d3fd\" DevicePath \"\"" Jan 30 13:31:10.684282 systemd[1]: Removed slice kubepods-burstable-pod250d3552_7dbe_45aa_926c_9aed6f589159.slice - libcontainer container kubepods-burstable-pod250d3552_7dbe_45aa_926c_9aed6f589159.slice. Jan 30 13:31:10.685751 systemd[1]: kubepods-burstable-pod250d3552_7dbe_45aa_926c_9aed6f589159.slice: Consumed 9.309s CPU time. Jan 30 13:31:10.689065 systemd[1]: Removed slice kubepods-besteffort-pod4cef61b5_a560_4b0f_b7cb_c966ab39c322.slice - libcontainer container kubepods-besteffort-pod4cef61b5_a560_4b0f_b7cb_c966ab39c322.slice. 
Jan 30 13:31:10.805545 kubelet[2830]: I0130 13:31:10.804263 2830 scope.go:117] "RemoveContainer" containerID="b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf" Jan 30 13:31:10.809199 containerd[1492]: time="2025-01-30T13:31:10.808568626Z" level=info msg="RemoveContainer for \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\"" Jan 30 13:31:10.822613 containerd[1492]: time="2025-01-30T13:31:10.822544254Z" level=info msg="RemoveContainer for \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\" returns successfully" Jan 30 13:31:10.823183 kubelet[2830]: I0130 13:31:10.823147 2830 scope.go:117] "RemoveContainer" containerID="55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14" Jan 30 13:31:10.826264 containerd[1492]: time="2025-01-30T13:31:10.826217194Z" level=info msg="RemoveContainer for \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\"" Jan 30 13:31:10.832509 containerd[1492]: time="2025-01-30T13:31:10.832406095Z" level=info msg="RemoveContainer for \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\" returns successfully" Jan 30 13:31:10.833650 kubelet[2830]: I0130 13:31:10.832817 2830 scope.go:117] "RemoveContainer" containerID="db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8" Jan 30 13:31:10.836716 containerd[1492]: time="2025-01-30T13:31:10.836663204Z" level=info msg="RemoveContainer for \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\"" Jan 30 13:31:10.844602 containerd[1492]: time="2025-01-30T13:31:10.841771247Z" level=info msg="RemoveContainer for \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\" returns successfully" Jan 30 13:31:10.844859 kubelet[2830]: I0130 13:31:10.842118 2830 scope.go:117] "RemoveContainer" containerID="8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b" Jan 30 13:31:10.849120 containerd[1492]: time="2025-01-30T13:31:10.848708281Z" level=info msg="RemoveContainer for 
\"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\"" Jan 30 13:31:10.858499 containerd[1492]: time="2025-01-30T13:31:10.855999240Z" level=info msg="RemoveContainer for \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\" returns successfully" Jan 30 13:31:10.861750 kubelet[2830]: I0130 13:31:10.861697 2830 scope.go:117] "RemoveContainer" containerID="0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0" Jan 30 13:31:10.873059 containerd[1492]: time="2025-01-30T13:31:10.872989477Z" level=info msg="RemoveContainer for \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\"" Jan 30 13:31:10.880120 containerd[1492]: time="2025-01-30T13:31:10.878809212Z" level=info msg="RemoveContainer for \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\" returns successfully" Jan 30 13:31:10.880535 kubelet[2830]: I0130 13:31:10.880463 2830 scope.go:117] "RemoveContainer" containerID="b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf" Jan 30 13:31:10.882767 containerd[1492]: time="2025-01-30T13:31:10.882706315Z" level=error msg="ContainerStatus for \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\": not found" Jan 30 13:31:10.883703 kubelet[2830]: E0130 13:31:10.882958 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\": not found" containerID="b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf" Jan 30 13:31:10.883703 kubelet[2830]: I0130 13:31:10.883006 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf"} 
err="failed to get container status \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3448e0272fe07c0f49975ee2c6f9c37f09a72f4a86dcceecabd83bcb13c89bf\": not found" Jan 30 13:31:10.883703 kubelet[2830]: I0130 13:31:10.883108 2830 scope.go:117] "RemoveContainer" containerID="55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14" Jan 30 13:31:10.886755 containerd[1492]: time="2025-01-30T13:31:10.886695420Z" level=error msg="ContainerStatus for \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\": not found" Jan 30 13:31:10.887169 kubelet[2830]: E0130 13:31:10.886901 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\": not found" containerID="55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14" Jan 30 13:31:10.887169 kubelet[2830]: I0130 13:31:10.886938 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14"} err="failed to get container status \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\": rpc error: code = NotFound desc = an error occurred when try to find container \"55e447f9c48f8e6f45d1c649b7dce87aa273cea6ca834f68bb9f59babb189e14\": not found" Jan 30 13:31:10.887169 kubelet[2830]: I0130 13:31:10.886967 2830 scope.go:117] "RemoveContainer" containerID="db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8" Jan 30 13:31:10.889403 kubelet[2830]: E0130 13:31:10.887404 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\": not found" containerID="db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8" Jan 30 13:31:10.889403 kubelet[2830]: I0130 13:31:10.887427 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8"} err="failed to get container status \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\": rpc error: code = NotFound desc = an error occurred when try to find container \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\": not found" Jan 30 13:31:10.889403 kubelet[2830]: I0130 13:31:10.887446 2830 scope.go:117] "RemoveContainer" containerID="8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b" Jan 30 13:31:10.889403 kubelet[2830]: E0130 13:31:10.887836 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\": not found" containerID="8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b" Jan 30 13:31:10.889403 kubelet[2830]: I0130 13:31:10.887859 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b"} err="failed to get container status \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\": not found" Jan 30 13:31:10.889403 kubelet[2830]: I0130 13:31:10.887875 2830 scope.go:117] "RemoveContainer" containerID="0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0" Jan 30 13:31:10.889644 
containerd[1492]: time="2025-01-30T13:31:10.887173068Z" level=error msg="ContainerStatus for \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db23d94d810ab29573f1ca9d925b7402c48179709d96bd89cf1ae182b2a60dd8\": not found" Jan 30 13:31:10.889644 containerd[1492]: time="2025-01-30T13:31:10.887703557Z" level=error msg="ContainerStatus for \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a0f6f17a3ed7c0bcb9b073f051905a05eb1e77715fbfc5bf58b5116452f962b\": not found" Jan 30 13:31:10.889644 containerd[1492]: time="2025-01-30T13:31:10.888035682Z" level=error msg="ContainerStatus for \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\": not found" Jan 30 13:31:10.889720 kubelet[2830]: E0130 13:31:10.888140 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\": not found" containerID="0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0" Jan 30 13:31:10.889720 kubelet[2830]: I0130 13:31:10.888159 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0"} err="failed to get container status \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d63af0b9be9873d1372e38a188a7aa846f98d0fbbede73d2123552f7bc2c1a0\": not found" Jan 30 13:31:10.889720 kubelet[2830]: I0130 13:31:10.888174 2830 
scope.go:117] "RemoveContainer" containerID="cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735" Jan 30 13:31:10.891075 containerd[1492]: time="2025-01-30T13:31:10.891030891Z" level=info msg="RemoveContainer for \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\"" Jan 30 13:31:10.906557 containerd[1492]: time="2025-01-30T13:31:10.903886821Z" level=info msg="RemoveContainer for \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\" returns successfully" Jan 30 13:31:10.906557 containerd[1492]: time="2025-01-30T13:31:10.904461310Z" level=error msg="ContainerStatus for \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\": not found" Jan 30 13:31:10.906749 kubelet[2830]: I0130 13:31:10.904195 2830 scope.go:117] "RemoveContainer" containerID="cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735" Jan 30 13:31:10.906749 kubelet[2830]: E0130 13:31:10.904673 2830 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\": not found" containerID="cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735" Jan 30 13:31:10.906749 kubelet[2830]: I0130 13:31:10.904706 2830 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735"} err="failed to get container status \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\": rpc error: code = NotFound desc = an error occurred when try to find container \"cffd7afa3d73526b9e1c3c2cce6f5578f435df147ef50c69413173f6685db735\": not found" Jan 30 13:31:11.154743 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98-rootfs.mount: Deactivated successfully. Jan 30 13:31:11.154861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72-rootfs.mount: Deactivated successfully. Jan 30 13:31:11.154918 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72-shm.mount: Deactivated successfully. Jan 30 13:31:11.154976 systemd[1]: var-lib-kubelet-pods-4cef61b5\x2da560\x2d4b0f\x2db7cb\x2dc966ab39c322-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkjkm9.mount: Deactivated successfully. Jan 30 13:31:11.155028 systemd[1]: var-lib-kubelet-pods-250d3552\x2d7dbe\x2d45aa\x2d926c\x2d9aed6f589159-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4t2d6.mount: Deactivated successfully. Jan 30 13:31:11.155084 systemd[1]: var-lib-kubelet-pods-250d3552\x2d7dbe\x2d45aa\x2d926c\x2d9aed6f589159-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:31:11.155137 systemd[1]: var-lib-kubelet-pods-250d3552\x2d7dbe\x2d45aa\x2d926c\x2d9aed6f589159-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:31:11.886877 kubelet[2830]: E0130 13:31:11.886324 2830 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:31:12.198657 sshd[4422]: Connection closed by 139.178.68.195 port 57264 Jan 30 13:31:12.199949 sshd-session[4420]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:12.205704 systemd[1]: sshd@21-78.46.226.143:22-139.178.68.195:57264.service: Deactivated successfully. Jan 30 13:31:12.211016 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 30 13:31:12.213919 systemd[1]: session-20.scope: Consumed 1.502s CPU time. Jan 30 13:31:12.216677 systemd-logind[1463]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:31:12.219137 systemd-logind[1463]: Removed session 20. Jan 30 13:31:12.377877 systemd[1]: Started sshd@22-78.46.226.143:22-139.178.68.195:57278.service - OpenSSH per-connection server daemon (139.178.68.195:57278). Jan 30 13:31:12.676968 kubelet[2830]: E0130 13:31:12.674932 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-zxtjh" podUID="85ed8820-dcbe-4c11-bbfd-359aded16a92" Jan 30 13:31:12.679009 kubelet[2830]: I0130 13:31:12.678967 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="250d3552-7dbe-45aa-926c-9aed6f589159" path="/var/lib/kubelet/pods/250d3552-7dbe-45aa-926c-9aed6f589159/volumes" Jan 30 13:31:12.682098 kubelet[2830]: I0130 13:31:12.682050 2830 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cef61b5-a560-4b0f-b7cb-c966ab39c322" path="/var/lib/kubelet/pods/4cef61b5-a560-4b0f-b7cb-c966ab39c322/volumes" Jan 30 13:31:12.793857 kubelet[2830]: I0130 13:31:12.792741 2830 setters.go:580] "Node became not ready" node="ci-4186-1-0-1-a1b218d3fd" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:31:12Z","lastTransitionTime":"2025-01-30T13:31:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:31:13.387593 sshd[4589]: Accepted publickey for core from 139.178.68.195 port 57278 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:13.391125 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 30 13:31:13.409937 systemd-logind[1463]: New session 21 of user core. Jan 30 13:31:13.418807 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:31:14.676115 kubelet[2830]: E0130 13:31:14.676025 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-zxtjh" podUID="85ed8820-dcbe-4c11-bbfd-359aded16a92" Jan 30 13:31:15.249000 kubelet[2830]: I0130 13:31:15.248140 2830 topology_manager.go:215] "Topology Admit Handler" podUID="e1c11e19-b000-4175-8ef0-b7a77bf2dff2" podNamespace="kube-system" podName="cilium-j69tf" Jan 30 13:31:15.249000 kubelet[2830]: E0130 13:31:15.248217 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250d3552-7dbe-45aa-926c-9aed6f589159" containerName="clean-cilium-state" Jan 30 13:31:15.249000 kubelet[2830]: E0130 13:31:15.248242 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250d3552-7dbe-45aa-926c-9aed6f589159" containerName="cilium-agent" Jan 30 13:31:15.249000 kubelet[2830]: E0130 13:31:15.248251 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250d3552-7dbe-45aa-926c-9aed6f589159" containerName="apply-sysctl-overwrites" Jan 30 13:31:15.249000 kubelet[2830]: E0130 13:31:15.248257 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4cef61b5-a560-4b0f-b7cb-c966ab39c322" containerName="cilium-operator" Jan 30 13:31:15.249000 kubelet[2830]: E0130 13:31:15.248263 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250d3552-7dbe-45aa-926c-9aed6f589159" containerName="mount-cgroup" Jan 30 13:31:15.249000 kubelet[2830]: E0130 13:31:15.248269 2830 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="250d3552-7dbe-45aa-926c-9aed6f589159" containerName="mount-bpf-fs" Jan 30 
13:31:15.249000 kubelet[2830]: I0130 13:31:15.248292 2830 memory_manager.go:354] "RemoveStaleState removing state" podUID="250d3552-7dbe-45aa-926c-9aed6f589159" containerName="cilium-agent" Jan 30 13:31:15.249000 kubelet[2830]: I0130 13:31:15.248298 2830 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cef61b5-a560-4b0f-b7cb-c966ab39c322" containerName="cilium-operator" Jan 30 13:31:15.260500 systemd[1]: Created slice kubepods-burstable-pode1c11e19_b000_4175_8ef0_b7a77bf2dff2.slice - libcontainer container kubepods-burstable-pode1c11e19_b000_4175_8ef0_b7a77bf2dff2.slice. Jan 30 13:31:15.271893 kubelet[2830]: W0130 13:31:15.271844 2830 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-0-1-a1b218d3fd" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-1-a1b218d3fd' and this object Jan 30 13:31:15.272102 kubelet[2830]: E0130 13:31:15.272088 2830 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4186-1-0-1-a1b218d3fd" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-0-1-a1b218d3fd' and this object Jan 30 13:31:15.318853 kubelet[2830]: I0130 13:31:15.318793 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-cni-path\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.319122 kubelet[2830]: I0130 13:31:15.319063 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-cilium-config-path\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.320539 kubelet[2830]: I0130 13:31:15.320434 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-host-proc-sys-kernel\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.320908 kubelet[2830]: I0130 13:31:15.320759 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc6cq\" (UniqueName: \"kubernetes.io/projected/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-kube-api-access-mc6cq\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.320908 kubelet[2830]: I0130 13:31:15.320797 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-etc-cni-netd\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.320908 kubelet[2830]: I0130 13:31:15.320853 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-cilium-cgroup\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.320908 kubelet[2830]: I0130 13:31:15.320872 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-clustermesh-secrets\") pod 
\"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.320908 kubelet[2830]: I0130 13:31:15.320890 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-hubble-tls\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.321236 kubelet[2830]: I0130 13:31:15.321091 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-hostproc\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.321236 kubelet[2830]: I0130 13:31:15.321117 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-xtables-lock\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.321236 kubelet[2830]: I0130 13:31:15.321160 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-cilium-ipsec-secrets\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.321236 kubelet[2830]: I0130 13:31:15.321179 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-host-proc-sys-net\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.321444 
kubelet[2830]: I0130 13:31:15.321200 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-cilium-run\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.321444 kubelet[2830]: I0130 13:31:15.321371 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-bpf-maps\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.321444 kubelet[2830]: I0130 13:31:15.321393 2830 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1c11e19-b000-4175-8ef0-b7a77bf2dff2-lib-modules\") pod \"cilium-j69tf\" (UID: \"e1c11e19-b000-4175-8ef0-b7a77bf2dff2\") " pod="kube-system/cilium-j69tf" Jan 30 13:31:15.405251 sshd[4591]: Connection closed by 139.178.68.195 port 57278 Jan 30 13:31:15.405060 sshd-session[4589]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:15.411690 systemd[1]: sshd@22-78.46.226.143:22-139.178.68.195:57278.service: Deactivated successfully. Jan 30 13:31:15.417202 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:31:15.418041 systemd[1]: session-21.scope: Consumed 1.191s CPU time. Jan 30 13:31:15.422858 systemd-logind[1463]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:31:15.424545 systemd-logind[1463]: Removed session 21. Jan 30 13:31:15.583102 systemd[1]: Started sshd@23-78.46.226.143:22-139.178.68.195:41118.service - OpenSSH per-connection server daemon (139.178.68.195:41118). 
Jan 30 13:31:16.169640 containerd[1492]: time="2025-01-30T13:31:16.168803221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j69tf,Uid:e1c11e19-b000-4175-8ef0-b7a77bf2dff2,Namespace:kube-system,Attempt:0,}" Jan 30 13:31:16.215522 containerd[1492]: time="2025-01-30T13:31:16.214501466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:31:16.215522 containerd[1492]: time="2025-01-30T13:31:16.214844552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:31:16.215522 containerd[1492]: time="2025-01-30T13:31:16.214862832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:31:16.215522 containerd[1492]: time="2025-01-30T13:31:16.215128836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:31:16.243011 systemd[1]: Started cri-containerd-4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857.scope - libcontainer container 4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857. 
Jan 30 13:31:16.280669 containerd[1492]: time="2025-01-30T13:31:16.280584395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j69tf,Uid:e1c11e19-b000-4175-8ef0-b7a77bf2dff2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\"" Jan 30 13:31:16.286857 containerd[1492]: time="2025-01-30T13:31:16.286799574Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:31:16.301386 containerd[1492]: time="2025-01-30T13:31:16.301316645Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6a97cc76e4de5999fb609e7f2e0a11d25f6df3a9802a00b3426c1f29d694437b\"" Jan 30 13:31:16.302910 containerd[1492]: time="2025-01-30T13:31:16.302859829Z" level=info msg="StartContainer for \"6a97cc76e4de5999fb609e7f2e0a11d25f6df3a9802a00b3426c1f29d694437b\"" Jan 30 13:31:16.332747 systemd[1]: Started cri-containerd-6a97cc76e4de5999fb609e7f2e0a11d25f6df3a9802a00b3426c1f29d694437b.scope - libcontainer container 6a97cc76e4de5999fb609e7f2e0a11d25f6df3a9802a00b3426c1f29d694437b. Jan 30 13:31:16.369033 containerd[1492]: time="2025-01-30T13:31:16.368980559Z" level=info msg="StartContainer for \"6a97cc76e4de5999fb609e7f2e0a11d25f6df3a9802a00b3426c1f29d694437b\" returns successfully" Jan 30 13:31:16.377949 systemd[1]: cri-containerd-6a97cc76e4de5999fb609e7f2e0a11d25f6df3a9802a00b3426c1f29d694437b.scope: Deactivated successfully. 
Jan 30 13:31:16.424563 containerd[1492]: time="2025-01-30T13:31:16.424100954Z" level=info msg="shim disconnected" id=6a97cc76e4de5999fb609e7f2e0a11d25f6df3a9802a00b3426c1f29d694437b namespace=k8s.io Jan 30 13:31:16.424563 containerd[1492]: time="2025-01-30T13:31:16.424209236Z" level=warning msg="cleaning up after shim disconnected" id=6a97cc76e4de5999fb609e7f2e0a11d25f6df3a9802a00b3426c1f29d694437b namespace=k8s.io Jan 30 13:31:16.424563 containerd[1492]: time="2025-01-30T13:31:16.424227316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:16.580861 sshd[4605]: Accepted publickey for core from 139.178.68.195 port 41118 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:16.583081 sshd-session[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:16.590072 systemd-logind[1463]: New session 22 of user core. Jan 30 13:31:16.595750 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:31:16.675738 kubelet[2830]: E0130 13:31:16.675146 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-zxtjh" podUID="85ed8820-dcbe-4c11-bbfd-359aded16a92" Jan 30 13:31:16.848937 containerd[1492]: time="2025-01-30T13:31:16.848872417Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:31:16.865017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017189543.mount: Deactivated successfully. 
Jan 30 13:31:16.874776 containerd[1492]: time="2025-01-30T13:31:16.874721708Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8\"" Jan 30 13:31:16.877097 containerd[1492]: time="2025-01-30T13:31:16.875931447Z" level=info msg="StartContainer for \"17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8\"" Jan 30 13:31:16.887814 kubelet[2830]: E0130 13:31:16.887770 2830 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:31:16.918964 systemd[1]: Started cri-containerd-17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8.scope - libcontainer container 17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8. Jan 30 13:31:16.959126 containerd[1492]: time="2025-01-30T13:31:16.958957005Z" level=info msg="StartContainer for \"17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8\" returns successfully" Jan 30 13:31:16.981015 systemd[1]: cri-containerd-17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8.scope: Deactivated successfully. 
Jan 30 13:31:17.014549 containerd[1492]: time="2025-01-30T13:31:17.014417564Z" level=info msg="shim disconnected" id=17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8 namespace=k8s.io Jan 30 13:31:17.014549 containerd[1492]: time="2025-01-30T13:31:17.014546166Z" level=warning msg="cleaning up after shim disconnected" id=17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8 namespace=k8s.io Jan 30 13:31:17.014952 containerd[1492]: time="2025-01-30T13:31:17.014561727Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:17.266939 sshd[4710]: Connection closed by 139.178.68.195 port 41118 Jan 30 13:31:17.267985 sshd-session[4605]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:17.273004 systemd[1]: sshd@23-78.46.226.143:22-139.178.68.195:41118.service: Deactivated successfully. Jan 30 13:31:17.275293 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:31:17.278565 systemd-logind[1463]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:31:17.280094 systemd-logind[1463]: Removed session 22. Jan 30 13:31:17.433891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17f6fcff8e2579d367da75d3af3e431a5a7712a1e39a30080e552c313b0495c8-rootfs.mount: Deactivated successfully. Jan 30 13:31:17.440954 systemd[1]: Started sshd@24-78.46.226.143:22-139.178.68.195:41122.service - OpenSSH per-connection server daemon (139.178.68.195:41122). Jan 30 13:31:17.857204 containerd[1492]: time="2025-01-30T13:31:17.857140122Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:31:17.872989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2074922676.mount: Deactivated successfully. 
Jan 30 13:31:17.877157 containerd[1492]: time="2025-01-30T13:31:17.877055277Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90\"" Jan 30 13:31:17.880303 containerd[1492]: time="2025-01-30T13:31:17.880202407Z" level=info msg="StartContainer for \"f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90\"" Jan 30 13:31:17.910794 systemd[1]: Started cri-containerd-f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90.scope - libcontainer container f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90. Jan 30 13:31:17.950768 containerd[1492]: time="2025-01-30T13:31:17.950710161Z" level=info msg="StartContainer for \"f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90\" returns successfully" Jan 30 13:31:17.955207 systemd[1]: cri-containerd-f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90.scope: Deactivated successfully. 
Jan 30 13:31:17.988721 containerd[1492]: time="2025-01-30T13:31:17.988575480Z" level=info msg="shim disconnected" id=f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90 namespace=k8s.io Jan 30 13:31:17.988721 containerd[1492]: time="2025-01-30T13:31:17.988674561Z" level=warning msg="cleaning up after shim disconnected" id=f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90 namespace=k8s.io Jan 30 13:31:17.988721 containerd[1492]: time="2025-01-30T13:31:17.988694361Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:18.006017 containerd[1492]: time="2025-01-30T13:31:18.005952394Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:31:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:31:18.436506 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2cc493646e4410b4a20ebaad05ced1bfb786a5358313b5867e03763c9f94b90-rootfs.mount: Deactivated successfully. Jan 30 13:31:18.441907 sshd[4774]: Accepted publickey for core from 139.178.68.195 port 41122 ssh2: RSA SHA256:RAqiXcD7auv4NtIWZl6x8O0m1t6BnLWhbotdWAXUAIk Jan 30 13:31:18.440904 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:31:18.453401 systemd-logind[1463]: New session 23 of user core. Jan 30 13:31:18.461159 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 30 13:31:18.674959 kubelet[2830]: E0130 13:31:18.674535 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-zxtjh" podUID="85ed8820-dcbe-4c11-bbfd-359aded16a92" Jan 30 13:31:18.858600 containerd[1492]: time="2025-01-30T13:31:18.857525471Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:31:18.883055 containerd[1492]: time="2025-01-30T13:31:18.882891830Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef\"" Jan 30 13:31:18.884945 containerd[1492]: time="2025-01-30T13:31:18.884853821Z" level=info msg="StartContainer for \"fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef\"" Jan 30 13:31:18.932834 systemd[1]: Started cri-containerd-fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef.scope - libcontainer container fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef. Jan 30 13:31:18.974509 systemd[1]: cri-containerd-fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef.scope: Deactivated successfully. Jan 30 13:31:18.980617 containerd[1492]: time="2025-01-30T13:31:18.978976062Z" level=info msg="StartContainer for \"fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef\" returns successfully" Jan 30 13:31:19.011827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef-rootfs.mount: Deactivated successfully. 
Jan 30 13:31:19.021809 containerd[1492]: time="2025-01-30T13:31:19.021616211Z" level=info msg="shim disconnected" id=fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef namespace=k8s.io Jan 30 13:31:19.021809 containerd[1492]: time="2025-01-30T13:31:19.021767214Z" level=warning msg="cleaning up after shim disconnected" id=fa22e4b4a6f35c054bbecd850016fc376604de121cdb7166887302150ff0e3ef namespace=k8s.io Jan 30 13:31:19.022763 containerd[1492]: time="2025-01-30T13:31:19.021782894Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:31:19.876316 containerd[1492]: time="2025-01-30T13:31:19.875901871Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:31:19.925084 containerd[1492]: time="2025-01-30T13:31:19.924914519Z" level=info msg="CreateContainer within sandbox \"4aee589b4eeb9cea0948cc3b2de3aabe307d0ead8ef89879b35168b3d47ff857\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bbc6f996a5b3618731b1c7132c72f3b7fd3ae247bed7e1b892dbed6a2401c7e9\"" Jan 30 13:31:19.925856 containerd[1492]: time="2025-01-30T13:31:19.925687371Z" level=info msg="StartContainer for \"bbc6f996a5b3618731b1c7132c72f3b7fd3ae247bed7e1b892dbed6a2401c7e9\"" Jan 30 13:31:19.957706 systemd[1]: run-containerd-runc-k8s.io-bbc6f996a5b3618731b1c7132c72f3b7fd3ae247bed7e1b892dbed6a2401c7e9-runc.M5TDCq.mount: Deactivated successfully. Jan 30 13:31:19.967749 systemd[1]: Started cri-containerd-bbc6f996a5b3618731b1c7132c72f3b7fd3ae247bed7e1b892dbed6a2401c7e9.scope - libcontainer container bbc6f996a5b3618731b1c7132c72f3b7fd3ae247bed7e1b892dbed6a2401c7e9. 
Jan 30 13:31:20.002078 containerd[1492]: time="2025-01-30T13:31:20.001930765Z" level=info msg="StartContainer for \"bbc6f996a5b3618731b1c7132c72f3b7fd3ae247bed7e1b892dbed6a2401c7e9\" returns successfully" Jan 30 13:31:20.362503 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 13:31:20.676282 kubelet[2830]: E0130 13:31:20.674918 2830 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-zxtjh" podUID="85ed8820-dcbe-4c11-bbfd-359aded16a92" Jan 30 13:31:21.184723 systemd[1]: run-containerd-runc-k8s.io-bbc6f996a5b3618731b1c7132c72f3b7fd3ae247bed7e1b892dbed6a2401c7e9-runc.r3IEv5.mount: Deactivated successfully. Jan 30 13:31:23.359543 systemd[1]: run-containerd-runc-k8s.io-bbc6f996a5b3618731b1c7132c72f3b7fd3ae247bed7e1b892dbed6a2401c7e9-runc.SUhK6b.mount: Deactivated successfully. 
Jan 30 13:31:23.747909 systemd-networkd[1386]: lxc_health: Link UP Jan 30 13:31:23.761270 systemd-networkd[1386]: lxc_health: Gained carrier Jan 30 13:31:24.202724 kubelet[2830]: I0130 13:31:24.202337 2830 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j69tf" podStartSLOduration=9.202316588 podStartE2EDuration="9.202316588s" podCreationTimestamp="2025-01-30 13:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:31:20.907721168 +0000 UTC m=+354.364827303" watchObservedRunningTime="2025-01-30 13:31:24.202316588 +0000 UTC m=+357.659422683" Jan 30 13:31:25.794783 systemd-networkd[1386]: lxc_health: Gained IPv6LL Jan 30 13:31:26.719990 containerd[1492]: time="2025-01-30T13:31:26.719844947Z" level=info msg="StopPodSandbox for \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\"" Jan 30 13:31:26.719990 containerd[1492]: time="2025-01-30T13:31:26.719982429Z" level=info msg="TearDown network for sandbox \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\" successfully" Jan 30 13:31:26.719990 containerd[1492]: time="2025-01-30T13:31:26.719996069Z" level=info msg="StopPodSandbox for \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\" returns successfully" Jan 30 13:31:26.725187 containerd[1492]: time="2025-01-30T13:31:26.723421521Z" level=info msg="RemovePodSandbox for \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\"" Jan 30 13:31:26.725187 containerd[1492]: time="2025-01-30T13:31:26.723825447Z" level=info msg="Forcibly stopping sandbox \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\"" Jan 30 13:31:26.725187 containerd[1492]: time="2025-01-30T13:31:26.723926809Z" level=info msg="TearDown network for sandbox \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\" successfully" Jan 30 13:31:26.735818 containerd[1492]: 
time="2025-01-30T13:31:26.735550505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:31:26.735818 containerd[1492]: time="2025-01-30T13:31:26.735647347Z" level=info msg="RemovePodSandbox \"adc79cd017b97089b72279b737a32ad576f36d72e1e9b5f065c5da0c71b89b98\" returns successfully" Jan 30 13:31:26.737399 containerd[1492]: time="2025-01-30T13:31:26.736926766Z" level=info msg="StopPodSandbox for \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\"" Jan 30 13:31:26.737399 containerd[1492]: time="2025-01-30T13:31:26.737044248Z" level=info msg="TearDown network for sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" successfully" Jan 30 13:31:26.737399 containerd[1492]: time="2025-01-30T13:31:26.737054848Z" level=info msg="StopPodSandbox for \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" returns successfully" Jan 30 13:31:26.737666 containerd[1492]: time="2025-01-30T13:31:26.737563176Z" level=info msg="RemovePodSandbox for \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\"" Jan 30 13:31:26.737666 containerd[1492]: time="2025-01-30T13:31:26.737601937Z" level=info msg="Forcibly stopping sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\"" Jan 30 13:31:26.737714 containerd[1492]: time="2025-01-30T13:31:26.737669738Z" level=info msg="TearDown network for sandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" successfully" Jan 30 13:31:26.743877 containerd[1492]: time="2025-01-30T13:31:26.743792071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jan 30 13:31:26.744030 containerd[1492]: time="2025-01-30T13:31:26.743902912Z" level=info msg="RemovePodSandbox \"3a171544903f2a412395e4a54ab4e652b14327aead95723412097c0a5b7f6e72\" returns successfully" Jan 30 13:31:30.139856 sshd[4837]: Connection closed by 139.178.68.195 port 41122 Jan 30 13:31:30.143086 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Jan 30 13:31:30.150375 systemd[1]: sshd@24-78.46.226.143:22-139.178.68.195:41122.service: Deactivated successfully. Jan 30 13:31:30.154644 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:31:30.156653 systemd-logind[1463]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:31:30.158927 systemd-logind[1463]: Removed session 23.