Jan 29 11:03:00.883066 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:03:00.883102 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:03:00.883116 kernel: KASLR enabled
Jan 29 11:03:00.883123 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 29 11:03:00.883130 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Jan 29 11:03:00.883136 kernel: random: crng init done
Jan 29 11:03:00.883145 kernel: secureboot: Secure boot disabled
Jan 29 11:03:00.883151 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:03:00.883158 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 29 11:03:00.883167 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:03:00.883174 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883181 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883188 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883195 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883203 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883213 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883220 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883227 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883235 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:03:00.883242 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:03:00.883249 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 29 11:03:00.883256 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:03:00.883263 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 11:03:00.883271 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Jan 29 11:03:00.883278 kernel: Zone ranges:
Jan 29 11:03:00.883287 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 11:03:00.883294 kernel: DMA32 empty
Jan 29 11:03:00.883301 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 29 11:03:00.883308 kernel: Movable zone start for each node
Jan 29 11:03:00.883315 kernel: Early memory node ranges
Jan 29 11:03:00.883323 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 29 11:03:00.883330 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 29 11:03:00.883337 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 29 11:03:00.883344 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 29 11:03:00.883351 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 29 11:03:00.883358 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 29 11:03:00.883366 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 29 11:03:00.883374 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 11:03:00.883382 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 29 11:03:00.883389 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:03:00.883399 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:03:00.883407 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:03:00.883415 kernel: psci: Trusted OS migration not required
Jan 29 11:03:00.883425 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:03:00.883432 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:03:00.883440 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:03:00.883448 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:03:00.883456 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 11:03:00.883463 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:03:00.883471 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:03:00.883479 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:03:00.883486 kernel: CPU features: detected: Spectre-v4
Jan 29 11:03:00.883494 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:03:00.883503 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:03:00.883511 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:03:00.883518 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:03:00.883526 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:03:00.883534 kernel: alternatives: applying boot alternatives
Jan 29 11:03:00.883543 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:03:00.883551 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:03:00.883559 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:03:00.883566 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:03:00.883574 kernel: Fallback order for Node 0: 0
Jan 29 11:03:00.883582 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 29 11:03:00.883591 kernel: Policy zone: Normal
Jan 29 11:03:00.883599 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:03:00.883606 kernel: software IO TLB: area num 2.
Jan 29 11:03:00.883614 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 29 11:03:00.883622 kernel: Memory: 3882676K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 213324K reserved, 0K cma-reserved)
Jan 29 11:03:00.883630 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:03:00.883638 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:03:00.883646 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:03:00.883654 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:03:00.883662 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:03:00.883670 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:03:00.883677 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:03:00.883687 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:03:00.883695 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:03:00.883716 kernel: GICv3: 256 SPIs implemented
Jan 29 11:03:00.883725 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:03:00.883732 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:03:00.883740 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:03:00.883748 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:03:00.883755 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:03:00.883763 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:03:00.883771 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:03:00.883779 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 29 11:03:00.883832 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 29 11:03:00.883841 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:03:00.883849 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:03:00.883857 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:03:00.883865 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:03:00.883873 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:03:00.883880 kernel: Console: colour dummy device 80x25
Jan 29 11:03:00.883888 kernel: ACPI: Core revision 20230628
Jan 29 11:03:00.883897 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:03:00.883905 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:03:00.883915 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:03:00.883923 kernel: landlock: Up and running.
Jan 29 11:03:00.883931 kernel: SELinux: Initializing.
Jan 29 11:03:00.883938 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:03:00.883946 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:03:00.883954 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:03:00.883963 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:03:00.883971 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:03:00.883979 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:03:00.883986 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:03:00.883996 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:03:00.884004 kernel: Remapping and enabling EFI services.
Jan 29 11:03:00.884012 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:03:00.884020 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:03:00.884028 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:03:00.884036 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 29 11:03:00.884044 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:03:00.884051 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:03:00.884059 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:03:00.884069 kernel: SMP: Total of 2 processors activated.
Jan 29 11:03:00.884077 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:03:00.884118 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:03:00.884133 kernel: CPU features: detected: Common not Private translations
Jan 29 11:03:00.884142 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:03:00.884151 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:03:00.884159 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:03:00.884168 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:03:00.884176 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:03:00.884187 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:03:00.884195 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:03:00.884204 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:03:00.884212 kernel: alternatives: applying system-wide alternatives
Jan 29 11:03:00.884233 kernel: devtmpfs: initialized
Jan 29 11:03:00.884242 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:03:00.884250 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:03:00.884259 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:03:00.884270 kernel: SMBIOS 3.0.0 present.
Jan 29 11:03:00.884279 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 29 11:03:00.884287 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:03:00.884296 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:03:00.884305 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:03:00.884313 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:03:00.884322 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:03:00.884331 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jan 29 11:03:00.884341 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:03:00.884349 kernel: cpuidle: using governor menu
Jan 29 11:03:00.884358 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:03:00.884366 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:03:00.884375 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:03:00.884383 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:03:00.884391 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:03:00.884400 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:03:00.884409 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:03:00.884418 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:03:00.884428 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:03:00.884437 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:03:00.884445 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:03:00.884453 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:03:00.884462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:03:00.884471 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:03:00.884479 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:03:00.884487 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:03:00.884494 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:03:00.884503 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:03:00.884510 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:03:00.884530 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:03:00.884537 kernel: ACPI: Interpreter enabled
Jan 29 11:03:00.884544 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:03:00.884551 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:03:00.884558 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:03:00.884565 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:03:00.884572 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:03:00.884753 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:03:00.884833 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:03:00.884898 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:03:00.884961 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:03:00.885023 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:03:00.885033 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:03:00.885040 kernel: PCI host bridge to bus 0000:00
Jan 29 11:03:00.885129 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:03:00.885189 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:03:00.885246 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:03:00.885302 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:03:00.885382 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:03:00.885462 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 29 11:03:00.885532 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 29 11:03:00.885598 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 11:03:00.885668 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.886851 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 29 11:03:00.886943 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.887009 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 29 11:03:00.887081 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.887175 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 29 11:03:00.887248 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.887313 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 29 11:03:00.887387 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.887451 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 29 11:03:00.887525 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.887590 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 29 11:03:00.887660 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.887744 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 29 11:03:00.887825 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.887890 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 29 11:03:00.887960 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:03:00.888028 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 29 11:03:00.888111 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 29 11:03:00.888182 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 29 11:03:00.888256 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 11:03:00.888323 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 29 11:03:00.888390 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:03:00.888461 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 11:03:00.888533 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 11:03:00.888600 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 29 11:03:00.888673 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 11:03:00.889076 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 29 11:03:00.889211 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 29 11:03:00.889290 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 11:03:00.889384 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 29 11:03:00.889460 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 11:03:00.889527 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 29 11:03:00.889594 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 29 11:03:00.889666 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 11:03:00.891869 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 29 11:03:00.891963 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 11:03:00.892068 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 11:03:00.892158 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 29 11:03:00.892229 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 29 11:03:00.892295 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 11:03:00.892363 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 29 11:03:00.892432 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 29 11:03:00.892496 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 29 11:03:00.892563 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 29 11:03:00.892627 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 29 11:03:00.892689 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 29 11:03:00.892788 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 11:03:00.892853 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 29 11:03:00.892916 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 29 11:03:00.892989 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 11:03:00.893053 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 29 11:03:00.893128 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 29 11:03:00.893195 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 11:03:00.893258 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 29 11:03:00.893321 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 29 11:03:00.893388 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 11:03:00.893454 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 29 11:03:00.893517 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 29 11:03:00.893583 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 11:03:00.893646 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 29 11:03:00.893722 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 29 11:03:00.893790 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 11:03:00.893854 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 29 11:03:00.893916 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 29 11:03:00.893986 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 11:03:00.894049 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 29 11:03:00.894150 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 29 11:03:00.894219 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 29 11:03:00.894283 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:03:00.894347 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 29 11:03:00.894411 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:03:00.894480 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 29 11:03:00.894544 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:03:00.894607 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 29 11:03:00.894670 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:03:00.895521 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 29 11:03:00.895603 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:03:00.895674 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 29 11:03:00.895765 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:03:00.895835 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 29 11:03:00.895900 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:03:00.895965 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 29 11:03:00.896029 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:03:00.896127 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 29 11:03:00.896215 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:03:00.896287 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 29 11:03:00.896352 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 29 11:03:00.896416 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 29 11:03:00.896480 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 11:03:00.896545 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 29 11:03:00.896613 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 11:03:00.896686 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 29 11:03:00.896777 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 11:03:00.896849 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 29 11:03:00.896921 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 11:03:00.896991 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 29 11:03:00.897061 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 11:03:00.897141 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 29 11:03:00.897208 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 11:03:00.897272 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 29 11:03:00.897340 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 11:03:00.897406 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 29 11:03:00.897469 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 11:03:00.897537 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 29 11:03:00.897600 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 29 11:03:00.897668 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 29 11:03:00.897888 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 29 11:03:00.897964 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:03:00.898034 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 29 11:03:00.898134 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 11:03:00.898210 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 11:03:00.898274 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 29 11:03:00.898336 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:03:00.898406 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 29 11:03:00.898474 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 11:03:00.898537 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 11:03:00.898600 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 29 11:03:00.898662 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:03:00.898747 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 11:03:00.898815 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 29 11:03:00.898882 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 11:03:00.898943 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 11:03:00.899012 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 29 11:03:00.899074 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:03:00.899162 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 11:03:00.899228 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 11:03:00.899289 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 11:03:00.899351 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 29 11:03:00.899416 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:03:00.899492 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 29 11:03:00.899559 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 29 11:03:00.899621 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 11:03:00.899684 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 11:03:00.902871 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 29 11:03:00.902952 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:03:00.903024 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 29 11:03:00.903140 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 29 11:03:00.903220 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 11:03:00.903283 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 11:03:00.903348 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 29 11:03:00.903412 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:03:00.903487 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 29 11:03:00.903552 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 29 11:03:00.903622 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 29 11:03:00.903723 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 11:03:00.903791 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 11:03:00.903855 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 29 11:03:00.903917 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:03:00.903983 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 11:03:00.904047 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 11:03:00.904129 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 29 11:03:00.904196 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:03:00.904268 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 11:03:00.904331 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 29 11:03:00.904394 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 29 11:03:00.904457 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:03:00.904522 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:03:00.904580 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:03:00.904635 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:03:00.906200 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 11:03:00.906299 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 29 11:03:00.906360 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:03:00.906427 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 29 11:03:00.906487 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 29 11:03:00.906544 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:03:00.906612 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 29 11:03:00.906679 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 29 11:03:00.906845 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:03:00.906932 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 29 11:03:00.907020 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 29 11:03:00.907081 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:03:00.907163 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 29 11:03:00.907228 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 29 11:03:00.907286 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:03:00.907351 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 29 11:03:00.907410 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 29 11:03:00.907471 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:03:00.907535 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 29 11:03:00.907593 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 29 11:03:00.907653 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:03:00.907781 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 29 11:03:00.907846 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 29 11:03:00.907904 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:03:00.907972 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 29 11:03:00.908029 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 29 11:03:00.908087 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:03:00.908106 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:03:00.908115 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:03:00.908123 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:03:00.908131 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:03:00.908138 kernel: iommu: Default domain type: Translated
Jan 29 11:03:00.908149 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:03:00.908156 kernel: efivars: Registered efivars operations
Jan 29 11:03:00.908164 kernel: vgaarb: loaded
Jan 29 11:03:00.908172 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:03:00.908180 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:03:00.908188 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:03:00.908195 kernel: pnp: PnP ACPI init
Jan 29 11:03:00.908270 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:03:00.908281 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:03:00.908292 kernel: NET: Registered PF_INET protocol family
Jan 29 11:03:00.908300 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:03:00.908308 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:03:00.908316 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:03:00.908323 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:03:00.908331 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:03:00.908339 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:03:00.908347 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:03:00.908359 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:03:00.908368 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:03:00.908439 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 29 11:03:00.908455 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:03:00.908466 kernel: kvm [1]: HYP mode not available
Jan 29 11:03:00.908473 kernel: Initialise system trusted keyrings
Jan 29 11:03:00.908481 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:03:00.908489 kernel: Key type asymmetric registered
Jan 29 11:03:00.908496 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:03:00.908506 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:03:00.908514 kernel: io scheduler mq-deadline registered
Jan 29 11:03:00.908522 kernel: io scheduler kyber registered
Jan 29 11:03:00.908529 kernel: io scheduler bfq registered
Jan 29 11:03:00.908538 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 11:03:00.908610 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 29 11:03:00.908677 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 29 11:03:00.908792 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 11:03:00.908881 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 29 11:03:00.908964 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 29 11:03:00.909031 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 29 11:03:00.909111 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 11:03:00.909179 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 11:03:00.909242 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:00.909311 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 11:03:00.909375 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 11:03:00.909439 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:00.909504 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 11:03:00.909577 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 11:03:00.909643 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:00.909779 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 11:03:00.909852 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 11:03:00.909914 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:00.909978 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 11:03:00.910040 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 11:03:00.910140 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:00.910220 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 11:03:00.910283 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 11:03:00.910346 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 
11:03:00.910356 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 11:03:00.910424 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 11:03:00.910489 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 11:03:00.910555 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:03:00.910565 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 11:03:00.910574 kernel: ACPI: button: Power Button [PWRB] Jan 29 11:03:00.910581 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 11:03:00.910648 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 11:03:00.910731 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 11:03:00.910743 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:03:00.910751 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 11:03:00.910817 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 11:03:00.910831 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 11:03:00.910839 kernel: thunder_xcv, ver 1.0 Jan 29 11:03:00.910846 kernel: thunder_bgx, ver 1.0 Jan 29 11:03:00.910854 kernel: nicpf, ver 1.0 Jan 29 11:03:00.910861 kernel: nicvf, ver 1.0 Jan 29 11:03:00.910934 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 11:03:00.910997 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:03:00 UTC (1738148580) Jan 29 11:03:00.911007 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:03:00.911018 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 11:03:00.911026 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 11:03:00.911033 kernel: watchdog: Hard watchdog permanently disabled Jan 29 11:03:00.911041 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:03:00.911048 kernel: Segment 
Routing with IPv6 Jan 29 11:03:00.911056 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:03:00.911064 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:03:00.911071 kernel: Key type dns_resolver registered Jan 29 11:03:00.911078 kernel: registered taskstats version 1 Jan 29 11:03:00.911088 kernel: Loading compiled-in X.509 certificates Jan 29 11:03:00.911109 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a' Jan 29 11:03:00.911117 kernel: Key type .fscrypt registered Jan 29 11:03:00.911124 kernel: Key type fscrypt-provisioning registered Jan 29 11:03:00.911134 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:03:00.911142 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:03:00.911150 kernel: ima: No architecture policies found Jan 29 11:03:00.911158 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 11:03:00.911167 kernel: clk: Disabling unused clocks Jan 29 11:03:00.911174 kernel: Freeing unused kernel memory: 39680K Jan 29 11:03:00.911182 kernel: Run /init as init process Jan 29 11:03:00.911189 kernel: with arguments: Jan 29 11:03:00.911197 kernel: /init Jan 29 11:03:00.911204 kernel: with environment: Jan 29 11:03:00.911212 kernel: HOME=/ Jan 29 11:03:00.911219 kernel: TERM=linux Jan 29 11:03:00.911226 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:03:00.911236 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:03:00.911248 systemd[1]: Detected virtualization kvm. Jan 29 11:03:00.911256 systemd[1]: Detected architecture arm64. Jan 29 11:03:00.911264 systemd[1]: Running in initrd. 
Jan 29 11:03:00.911272 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:03:00.911280 systemd[1]: Hostname set to .
Jan 29 11:03:00.911288 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:03:00.911296 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:03:00.911306 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:03:00.911314 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:03:00.911322 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:03:00.911331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:03:00.911339 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:03:00.911348 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:03:00.911357 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:03:00.911368 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:03:00.911376 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:03:00.911384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:03:00.911392 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:03:00.911400 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:03:00.911408 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:03:00.911416 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:03:00.911424 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:03:00.911434 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:03:00.911442 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:03:00.911450 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:03:00.911459 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:03:00.911467 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:03:00.911475 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:03:00.911483 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:03:00.911491 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:03:00.911501 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:03:00.911510 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:03:00.911518 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:03:00.911526 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:03:00.911534 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:03:00.911542 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:00.911550 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:03:00.911558 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:03:00.911588 systemd-journald[237]: Collecting audit messages is disabled.
Jan 29 11:03:00.911611 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:03:00.911622 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:03:00.911630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:00.911639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:03:00.911648 systemd-journald[237]: Journal started
Jan 29 11:03:00.911667 systemd-journald[237]: Runtime Journal (/run/log/journal/c28c27ebbe5c4d6a89542e32256051bf) is 8.0M, max 76.6M, 68.6M free.
Jan 29 11:03:00.904769 systemd-modules-load[238]: Inserted module 'overlay'
Jan 29 11:03:00.917729 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:03:00.922734 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:03:00.924391 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 29 11:03:00.925372 kernel: Bridge firewalling registered
Jan 29 11:03:00.925052 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:03:00.928245 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:03:00.934970 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:03:00.938074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:03:00.942536 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:03:00.959828 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:03:00.961695 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:03:00.965147 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:03:00.972251 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:03:00.973150 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:03:00.981840 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:03:00.986692 dracut-cmdline[272]: dracut-dracut-053
Jan 29 11:03:00.990986 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:03:01.009823 systemd-resolved[274]: Positive Trust Anchors:
Jan 29 11:03:01.009895 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:03:01.009927 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:03:01.015467 systemd-resolved[274]: Defaulting to hostname 'linux'.
Jan 29 11:03:01.016492 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:03:01.017464 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:03:01.084753 kernel: SCSI subsystem initialized
Jan 29 11:03:01.088747 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:03:01.097141 kernel: iscsi: registered transport (tcp)
Jan 29 11:03:01.109754 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:03:01.109826 kernel: QLogic iSCSI HBA Driver
Jan 29 11:03:01.158617 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:03:01.167035 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:03:01.187902 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:03:01.187963 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:03:01.188836 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:03:01.236764 kernel: raid6: neonx8 gen() 15609 MB/s
Jan 29 11:03:01.253767 kernel: raid6: neonx4 gen() 12180 MB/s
Jan 29 11:03:01.270735 kernel: raid6: neonx2 gen() 13081 MB/s
Jan 29 11:03:01.287754 kernel: raid6: neonx1 gen() 10223 MB/s
Jan 29 11:03:01.304734 kernel: raid6: int64x8 gen() 6574 MB/s
Jan 29 11:03:01.321770 kernel: raid6: int64x4 gen() 7287 MB/s
Jan 29 11:03:01.338753 kernel: raid6: int64x2 gen() 6070 MB/s
Jan 29 11:03:01.355846 kernel: raid6: int64x1 gen() 4901 MB/s
Jan 29 11:03:01.355937 kernel: raid6: using algorithm neonx8 gen() 15609 MB/s
Jan 29 11:03:01.372763 kernel: raid6: .... xor() 11820 MB/s, rmw enabled
Jan 29 11:03:01.372841 kernel: raid6: using neon recovery algorithm
Jan 29 11:03:01.377749 kernel: xor: measuring software checksum speed
Jan 29 11:03:01.377807 kernel: 8regs : 16543 MB/sec
Jan 29 11:03:01.377829 kernel: 32regs : 16925 MB/sec
Jan 29 11:03:01.378812 kernel: arm64_neon : 19354 MB/sec
Jan 29 11:03:01.378845 kernel: xor: using function: arm64_neon (19354 MB/sec)
Jan 29 11:03:01.429776 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:03:01.444182 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:03:01.451914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:03:01.467643 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Jan 29 11:03:01.471027 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:03:01.483934 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:03:01.499105 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 29 11:03:01.534727 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:03:01.539916 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:03:01.590380 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:03:01.601918 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:03:01.619483 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:03:01.621285 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:03:01.622903 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:03:01.623448 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:03:01.633861 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:03:01.645650 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:03:01.677071 kernel: scsi host0: Virtio SCSI HBA
Jan 29 11:03:01.687769 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:03:01.687853 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 29 11:03:01.708064 kernel: ACPI: bus type USB registered
Jan 29 11:03:01.708138 kernel: usbcore: registered new interface driver usbfs
Jan 29 11:03:01.708152 kernel: usbcore: registered new interface driver hub
Jan 29 11:03:01.709033 kernel: usbcore: registered new device driver usb
Jan 29 11:03:01.714102 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:03:01.714219 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:03:01.715866 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:03:01.716401 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:03:01.716524 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:01.717133 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:01.728044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:01.742336 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 11:03:01.761049 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 29 11:03:01.761213 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 11:03:01.761300 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 11:03:01.761387 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 29 11:03:01.761464 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 11:03:01.761542 kernel: hub 1-0:1.0: USB hub found
Jan 29 11:03:01.761643 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 29 11:03:01.761773 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 29 11:03:01.761863 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:03:01.762074 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 11:03:01.762251 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:03:01.762351 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 11:03:01.762456 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 29 11:03:01.772986 kernel: hub 2-0:1.0: USB hub found
Jan 29 11:03:01.773155 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 29 11:03:01.773259 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 29 11:03:01.773344 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 11:03:01.773424 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 29 11:03:01.773505 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 11:03:01.773607 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:03:01.773619 kernel: GPT:17805311 != 80003071
Jan 29 11:03:01.773631 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:03:01.773641 kernel: GPT:17805311 != 80003071
Jan 29 11:03:01.773650 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:03:01.773659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:03:01.773668 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 29 11:03:01.748221 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:01.758720 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:03:01.788136 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:03:01.815003 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (506)
Jan 29 11:03:01.821731 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (505)
Jan 29 11:03:01.833866 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 29 11:03:01.840181 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 29 11:03:01.846324 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 11:03:01.850658 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 29 11:03:01.851863 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 29 11:03:01.864114 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:03:01.875010 disk-uuid[575]: Primary Header is updated.
Jan 29 11:03:01.875010 disk-uuid[575]: Secondary Entries is updated.
Jan 29 11:03:01.875010 disk-uuid[575]: Secondary Header is updated.
Jan 29 11:03:01.879747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:03:01.998028 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 11:03:02.240863 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 29 11:03:02.375496 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 29 11:03:02.375559 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 29 11:03:02.375856 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 29 11:03:02.430914 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 29 11:03:02.431272 kernel: usbcore: registered new interface driver usbhid
Jan 29 11:03:02.431298 kernel: usbhid: USB HID core driver
Jan 29 11:03:02.889759 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:03:02.890476 disk-uuid[576]: The operation has completed successfully.
Jan 29 11:03:02.944327 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:03:02.944439 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:03:02.957889 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:03:02.963765 sh[590]: Success
Jan 29 11:03:02.976729 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:03:03.041851 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:03:03.043776 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:03:03.049834 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:03:03.064869 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:03:03.064925 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:03:03.064942 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:03:03.064968 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:03:03.065855 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:03:03.072730 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 11:03:03.074157 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:03:03.075891 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:03:03.092020 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:03:03.095934 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:03:03.109449 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:03:03.109513 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:03:03.110527 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:03:03.114855 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 11:03:03.114899 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:03:03.125868 kernel: BTRFS info (device sda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:03:03.125698 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:03:03.131281 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:03:03.137949 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:03:03.229642 ignition[679]: Ignition 2.20.0
Jan 29 11:03:03.229657 ignition[679]: Stage: fetch-offline
Jan 29 11:03:03.229698 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:03:03.232852 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:03:03.230333 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:03:03.230523 ignition[679]: parsed url from cmdline: ""
Jan 29 11:03:03.230527 ignition[679]: no config URL provided
Jan 29 11:03:03.230531 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:03:03.230539 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:03:03.230544 ignition[679]: failed to fetch config: resource requires networking
Jan 29 11:03:03.230897 ignition[679]: Ignition finished successfully
Jan 29 11:03:03.239736 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:03:03.246990 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:03:03.268880 systemd-networkd[779]: lo: Link UP
Jan 29 11:03:03.268888 systemd-networkd[779]: lo: Gained carrier
Jan 29 11:03:03.270532 systemd-networkd[779]: Enumeration completed
Jan 29 11:03:03.270709 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:03:03.271993 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:03.271997 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:03:03.273221 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:03.273224 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:03:03.273759 systemd[1]: Reached target network.target - Network.
Jan 29 11:03:03.274787 systemd-networkd[779]: eth0: Link UP
Jan 29 11:03:03.274790 systemd-networkd[779]: eth0: Gained carrier
Jan 29 11:03:03.274797 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:03.280027 systemd-networkd[779]: eth1: Link UP
Jan 29 11:03:03.280031 systemd-networkd[779]: eth1: Gained carrier
Jan 29 11:03:03.280039 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:03.285939 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:03:03.299398 ignition[781]: Ignition 2.20.0
Jan 29 11:03:03.299409 ignition[781]: Stage: fetch
Jan 29 11:03:03.299567 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:03:03.299577 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:03:03.299672 ignition[781]: parsed url from cmdline: ""
Jan 29 11:03:03.299676 ignition[781]: no config URL provided
Jan 29 11:03:03.299680 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:03:03.299687 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:03:03.299786 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 29 11:03:03.300631 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 11:03:03.305794 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:03:03.341805 systemd-networkd[779]: eth0: DHCPv4 address 168.119.110.78/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 11:03:03.501667 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 29 11:03:03.508014 ignition[781]: GET result: OK
Jan 29 11:03:03.508203 ignition[781]: parsing config with SHA512: 491e6e504f910180c0d521251e520e5ddb44bd33eb075e13cb1c0f9b3cd70e658c8872ed99ddcdf5210a660cc3f4e71bac60d553f19d48c0c34c4e7eb47d2165
Jan 29 11:03:03.514373 unknown[781]: fetched base config from "system"
Jan 29 11:03:03.514382 unknown[781]: fetched base config from "system"
Jan 29 11:03:03.514771 ignition[781]: fetch: fetch complete
Jan 29 11:03:03.514388 unknown[781]: fetched user config from "hetzner"
Jan 29 11:03:03.514775 ignition[781]: fetch: fetch passed
Jan 29 11:03:03.516484 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:03:03.514814 ignition[781]: Ignition finished successfully
Jan 29 11:03:03.523967 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:03:03.537131 ignition[788]: Ignition 2.20.0
Jan 29 11:03:03.537140 ignition[788]: Stage: kargs
Jan 29 11:03:03.537304 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:03:03.537313 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:03:03.538215 ignition[788]: kargs: kargs passed
Jan 29 11:03:03.538265 ignition[788]: Ignition finished successfully
Jan 29 11:03:03.541507 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:03:03.546896 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:03:03.559011 ignition[794]: Ignition 2.20.0
Jan 29 11:03:03.559020 ignition[794]: Stage: disks
Jan 29 11:03:03.559253 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:03:03.559263 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:03:03.560252 ignition[794]: disks: disks passed
Jan 29 11:03:03.560299 ignition[794]: Ignition finished successfully
Jan 29 11:03:03.561977 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:03:03.563530 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:03:03.564899 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:03:03.565903 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:03:03.567158 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:03:03.568666 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:03:03.577972 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:03:03.594278 systemd-fsck[803]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 29 11:03:03.596820 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:03:03.600910 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:03:03.662771 kernel: EXT4-fs (sda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none.
Jan 29 11:03:03.663343 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:03:03.664271 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:03:03.673908 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:03:03.677266 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:03:03.679930 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 29 11:03:03.682143 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:03:03.683375 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:03:03.688605 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:03:03.695120 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (811)
Jan 29 11:03:03.695172 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:03:03.696029 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:03:03.696050 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:03:03.700786 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 11:03:03.700851 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:03:03.701051 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:03:03.706794 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:03:03.747297 coreos-metadata[813]: Jan 29 11:03:03.747 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 29 11:03:03.749234 coreos-metadata[813]: Jan 29 11:03:03.749 INFO Fetch successful
Jan 29 11:03:03.751481 coreos-metadata[813]: Jan 29 11:03:03.750 INFO wrote hostname ci-4152-2-0-3-44dff38e5d to /sysroot/etc/hostname
Jan 29 11:03:03.753477 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:03:03.756682 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:03:03.761771 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:03:03.767122 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:03:03.771221 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:03:03.867440 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:03:03.878980 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:03:03.884749 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:03:03.889732 kernel: BTRFS info (device sda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:03:03.910753 ignition[928]: INFO : Ignition 2.20.0
Jan 29 11:03:03.910753 ignition[928]: INFO : Stage: mount
Jan 29 11:03:03.910753 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:03:03.910753 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:03:03.916147 ignition[928]: INFO : mount: mount passed
Jan 29 11:03:03.916147 ignition[928]: INFO : Ignition finished successfully
Jan 29 11:03:03.913842 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:03:03.916672 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:03:03.928876 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:03:04.064387 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:03:04.070036 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:03:04.081026 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (940)
Jan 29 11:03:04.081121 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:03:04.081151 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:03:04.081763 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:03:04.085085 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 11:03:04.085131 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:03:04.088002 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:03:04.111086 ignition[957]: INFO : Ignition 2.20.0
Jan 29 11:03:04.111086 ignition[957]: INFO : Stage: files
Jan 29 11:03:04.112492 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:03:04.112492 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:03:04.112492 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:03:04.115477 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:03:04.115477 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:03:04.117268 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:03:04.118001 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:03:04.119232 unknown[957]: wrote ssh authorized keys file for user: core
Jan 29 11:03:04.120464 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:03:04.121482 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:03:04.121482 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 29 11:03:04.220149 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 29 11:03:04.927218 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 29 11:03:04.927218 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:03:04.927218 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jan 29 11:03:05.064948 systemd-networkd[779]: eth0: Gained IPv6LL
Jan 29 11:03:05.065486 systemd-networkd[779]: eth1: Gained IPv6LL
Jan 29 11:03:05.473048 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:03:05.552298 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:03:05.553455 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 11:03:06.101846 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:03:06.396151 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:03:06.396151 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 29 11:03:06.399012 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:03:06.399966 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:03:06.399966 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:03:06.402404 ignition[957]: INFO : files: files passed
Jan 29 11:03:06.402404 ignition[957]: INFO : Ignition finished successfully
Jan 29 11:03:06.402363 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:03:06.411183 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:03:06.413295 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:03:06.416155 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:03:06.418896 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:03:06.425665 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:03:06.425665 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:03:06.427509 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:03:06.429881 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:03:06.430671 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:03:06.435874 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:03:06.464558 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:03:06.465468 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:03:06.467810 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:03:06.468555 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:03:06.470031 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:03:06.472875 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:03:06.487415 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:03:06.493896 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:03:06.503908 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:03:06.504650 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:03:06.505353 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:03:06.506406 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:03:06.506515 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:03:06.507989 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:03:06.509001 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:03:06.509956 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:03:06.510860 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:03:06.511803 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:03:06.512874 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:03:06.513793 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:03:06.514844 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:03:06.515721 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:03:06.516681 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:03:06.517432 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:03:06.517585 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:03:06.518658 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:03:06.519645 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:03:06.520525 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:03:06.520624 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:03:06.521499 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:03:06.521642 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:03:06.522855 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:03:06.523001 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:03:06.524100 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:03:06.524237 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:03:06.524875 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 29 11:03:06.525007 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 29 11:03:06.533337 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:03:06.533853 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:03:06.534019 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:03:06.537945 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:03:06.539225 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:03:06.539346 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:03:06.540478 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:03:06.540637 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:03:06.551508 ignition[1009]: INFO : Ignition 2.20.0
Jan 29 11:03:06.551508 ignition[1009]: INFO : Stage: umount
Jan 29 11:03:06.554799 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:03:06.554799 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:03:06.554799 ignition[1009]: INFO : umount: umount passed
Jan 29 11:03:06.553425 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:03:06.558958 ignition[1009]: INFO : Ignition finished successfully
Jan 29 11:03:06.553532 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:03:06.556549 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:03:06.557805 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:03:06.558649 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:03:06.558696 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:03:06.559490 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:03:06.559528 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:03:06.561293 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 29 11:03:06.561332 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 29 11:03:06.562918 systemd[1]: Stopped target network.target - Network.
Jan 29 11:03:06.563993 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:03:06.564040 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:03:06.568346 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:03:06.569105 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:03:06.572753 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:03:06.573355 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:03:06.574152 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:03:06.575169 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:03:06.575206 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:03:06.575980 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:03:06.576015 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:03:06.576514 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:03:06.576562 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:03:06.577157 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:03:06.577197 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:03:06.578610 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:03:06.579645 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:03:06.581398 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:03:06.581880 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:03:06.581962 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:03:06.582860 systemd-networkd[779]: eth1: DHCPv6 lease lost
Jan 29 11:03:06.583380 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:03:06.583461 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:03:06.585892 systemd-networkd[779]: eth0: DHCPv6 lease lost
Jan 29 11:03:06.588625 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:03:06.588920 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:03:06.591770 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:03:06.591952 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:03:06.594771 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:03:06.594846 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:03:06.602897 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:03:06.603379 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:03:06.603437 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:03:06.604103 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:03:06.604141 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:03:06.604654 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:03:06.604692 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:03:06.605278 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:03:06.605317 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:03:06.607602 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:03:06.618752 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:03:06.618847 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:03:06.627007 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:03:06.627313 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:03:06.630336 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:03:06.630414 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:03:06.631930 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:03:06.631994 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:03:06.633596 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:03:06.633639 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:03:06.635032 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:03:06.635082 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:03:06.636383 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:03:06.636422 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:03:06.652079 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:03:06.653229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:03:06.653329 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:03:06.658490 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:03:06.658549 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:06.660939 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:03:06.662745 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:03:06.664497 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:03:06.671008 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:03:06.679350 systemd[1]: Switching root.
Jan 29 11:03:06.715254 systemd-journald[237]: Journal stopped
Jan 29 11:03:07.625088 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:03:07.625168 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:03:07.625182 kernel: SELinux: policy capability open_perms=1
Jan 29 11:03:07.625192 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:03:07.625202 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:03:07.625216 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:03:07.625226 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:03:07.625239 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:03:07.625248 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:03:07.625257 kernel: audit: type=1403 audit(1738148586.892:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:03:07.625268 systemd[1]: Successfully loaded SELinux policy in 36.189ms.
Jan 29 11:03:07.625292 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.638ms.
Jan 29 11:03:07.625306 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:03:07.625318 systemd[1]: Detected virtualization kvm.
Jan 29 11:03:07.625329 systemd[1]: Detected architecture arm64.
Jan 29 11:03:07.625339 systemd[1]: Detected first boot.
Jan 29 11:03:07.625349 systemd[1]: Hostname set to .
Jan 29 11:03:07.625359 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:03:07.625371 zram_generator::config[1052]: No configuration found.
Jan 29 11:03:07.625382 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:03:07.625396 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:03:07.625408 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:03:07.625418 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:03:07.625429 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:03:07.625439 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:03:07.625450 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:03:07.625460 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:03:07.625470 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:03:07.625480 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:03:07.625492 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:03:07.625502 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:03:07.625513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:03:07.625523 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:03:07.625533 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:03:07.625544 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:03:07.625554 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:03:07.625565 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:03:07.625575 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 11:03:07.625587 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:03:07.625597 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:03:07.625607 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:03:07.625617 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:03:07.625628 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:03:07.625638 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:03:07.625649 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:03:07.625661 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:03:07.625671 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:03:07.625686 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:03:07.625697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:03:07.625965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:03:07.625981 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:03:07.625992 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:03:07.626002 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:03:07.626013 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:03:07.626026 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:03:07.626036 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:03:07.626058 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:03:07.626071 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:03:07.626081 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:03:07.626096 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:03:07.626109 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:03:07.626120 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:03:07.626130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:07.626141 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:03:07.626152 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:03:07.626162 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:07.626174 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:03:07.626184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:07.626196 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:03:07.626206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:07.626217 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:03:07.626228 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:03:07.626238 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:03:07.626248 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:03:07.626258 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:03:07.626268 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:03:07.626278 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:03:07.626290 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:03:07.626301 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:03:07.626311 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:03:07.626322 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:03:07.626332 systemd[1]: Stopped verity-setup.service.
Jan 29 11:03:07.626342 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:03:07.626352 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:03:07.626363 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:03:07.626375 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:03:07.626385 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:03:07.626395 kernel: loop: module loaded
Jan 29 11:03:07.626405 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:03:07.626415 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:03:07.626427 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:03:07.626438 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:03:07.626448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:07.626458 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:07.626470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:07.626480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:07.626490 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:03:07.626500 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:07.626512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:07.626550 systemd-journald[1122]: Collecting audit messages is disabled.
Jan 29 11:03:07.626573 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:03:07.626586 systemd-journald[1122]: Journal started
Jan 29 11:03:07.626608 systemd-journald[1122]: Runtime Journal (/run/log/journal/c28c27ebbe5c4d6a89542e32256051bf) is 8.0M, max 76.6M, 68.6M free.
Jan 29 11:03:07.388083 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:03:07.408553 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 11:03:07.408976 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:03:07.629840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:03:07.633881 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:03:07.633205 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:03:07.647196 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:03:07.648387 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:03:07.649201 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:03:07.649237 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:03:07.651747 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:03:07.657007 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:03:07.658770 kernel: ACPI: bus type drm_connector registered
Jan 29 11:03:07.660622 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:03:07.661341 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:07.664724 kernel: fuse: init (API version 7.39)
Jan 29 11:03:07.666085 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:03:07.667947 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:03:07.669606 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:03:07.670588 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:03:07.671398 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:03:07.673247 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:03:07.676182 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:03:07.677620 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:03:07.677790 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:03:07.678645 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:03:07.678803 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:03:07.680485 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:03:07.686519 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:03:07.697008 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:03:07.721768 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:03:07.722519 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:03:07.729983 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:03:07.734914 systemd-journald[1122]: Time spent on flushing to /var/log/journal/c28c27ebbe5c4d6a89542e32256051bf is 33.687ms for 1131 entries.
Jan 29 11:03:07.734914 systemd-journald[1122]: System Journal (/var/log/journal/c28c27ebbe5c4d6a89542e32256051bf) is 8.0M, max 584.8M, 576.8M free.
Jan 29 11:03:07.788667 systemd-journald[1122]: Received client request to flush runtime journal.
Jan 29 11:03:07.789142 kernel: loop0: detected capacity change from 0 to 201592
Jan 29 11:03:07.736799 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:03:07.747911 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:03:07.794094 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:03:07.795800 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:03:07.796885 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:03:07.802978 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:03:07.805752 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:03:07.821993 kernel: loop1: detected capacity change from 0 to 116808
Jan 29 11:03:07.854119 kernel: loop2: detected capacity change from 0 to 8
Jan 29 11:03:07.852099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:03:07.853396 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:03:07.862920 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:03:07.876949 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:03:07.883052 kernel: loop3: detected capacity change from 0 to 113536
Jan 29 11:03:07.911010 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:03:07.923741 kernel: loop4: detected capacity change from 0 to 201592
Jan 29 11:03:07.925790 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jan 29 11:03:07.926365 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jan 29 11:03:07.940265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:03:07.950842 kernel: loop5: detected capacity change from 0 to 116808
Jan 29 11:03:07.969850 kernel: loop6: detected capacity change from 0 to 8
Jan 29 11:03:07.973196 kernel: loop7: detected capacity change from 0 to 113536
Jan 29 11:03:07.983491 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 11:03:07.984797 (sd-merge)[1190]: Merged extensions into '/usr'.
Jan 29 11:03:07.993266 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:03:07.993291 systemd[1]: Reloading...
Jan 29 11:03:08.113599 zram_generator::config[1219]: No configuration found.
Jan 29 11:03:08.200311 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:03:08.250644 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:03:08.305745 systemd[1]: Reloading finished in 312 ms.
Jan 29 11:03:08.338199 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:03:08.344679 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:03:08.352666 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:03:08.355006 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:03:08.358519 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:03:08.364972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:03:08.367886 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:03:08.367905 systemd[1]: Reloading...
Jan 29 11:03:08.391546 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:03:08.393885 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:03:08.394584 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:03:08.394841 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jan 29 11:03:08.394887 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jan 29 11:03:08.399945 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:03:08.399953 systemd-tmpfiles[1256]: Skipping /boot
Jan 29 11:03:08.413522 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:03:08.415890 systemd-tmpfiles[1256]: Skipping /boot
Jan 29 11:03:08.422209 systemd-udevd[1258]: Using default interface naming scheme 'v255'.
Jan 29 11:03:08.444762 zram_generator::config[1284]: No configuration found.
Jan 29 11:03:08.636917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:03:08.688726 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1310)
Jan 29 11:03:08.705725 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:03:08.725317 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 11:03:08.726427 systemd[1]: Reloading finished in 358 ms.
Jan 29 11:03:08.742242 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:03:08.745744 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:03:08.767760 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 29 11:03:08.790218 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:03:08.790827 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 29 11:03:08.800733 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 11:03:08.800820 kernel: [drm] features: -context_init
Jan 29 11:03:08.792610 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:03:08.793469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:08.795930 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:08.805724 kernel: [drm] number of scanouts: 1
Jan 29 11:03:08.805814 kernel: [drm] number of cap sets: 0
Jan 29 11:03:08.808947 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:08.812862 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:08.813887 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:08.818446 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:03:08.824918 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:03:08.829727 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 29 11:03:08.832997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:03:08.837884 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:03:08.838996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:08.840754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:08.842308 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:08.842459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:08.847720 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 11:03:08.861484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 11:03:08.861732 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 11:03:08.874160 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:03:08.874740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:03:08.875379 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:08.875915 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:08.883485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:08.893275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:08.897936 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:08.901045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:08.902942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:08.907020 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:08.909525 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:08.911475 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:08.915866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:08.916385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:08.918281 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:08.918406 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:08.930425 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:03:08.939649 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:03:08.945714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:08.952077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:08.954985 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:08.961080 augenrules[1411]: No rules
Jan 29 11:03:08.963020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:08.963575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:08.967372 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:03:08.968195 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:03:08.970955 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:03:08.972727 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:03:08.975734 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:03:08.977123 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:03:08.978321 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:03:08.978475 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:08.979575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:08.979797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:08.980880 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:08.981001 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:08.982332 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:08.982467 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:09.002065 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:03:09.004722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:03:09.012046 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:03:09.015986 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:03:09.019527 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:03:09.022010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:03:09.022633 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:03:09.024948 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:03:09.028929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:03:09.029499 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:03:09.029873 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:03:09.033776 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:03:09.034845 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:03:09.035511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:03:09.035632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:03:09.058057 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:03:09.062018 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:03:09.081125 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:03:09.085236 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:03:09.086660 augenrules[1424]: /sbin/augenrules: No change
Jan 29 11:03:09.088666 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:03:09.088919 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:03:09.091387 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:03:09.091525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:03:09.094020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:03:09.094127 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:03:09.105273 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:03:09.105713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:03:09.120910 augenrules[1464]: No rules
Jan 29 11:03:09.124072 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:03:09.125125 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:03:09.125281 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:03:09.131512 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:03:09.141927 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:03:09.151846 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:03:09.160469 systemd-networkd[1376]: lo: Link UP
Jan 29 11:03:09.160914 systemd-networkd[1376]: lo: Gained carrier
Jan 29 11:03:09.168175 systemd-networkd[1376]: Enumeration completed
Jan 29 11:03:09.170289 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:03:09.172316 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:09.172589 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:03:09.176149 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:09.176408 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:03:09.177110 systemd-networkd[1376]: eth0: Link UP
Jan 29 11:03:09.177200 systemd-networkd[1376]: eth0: Gained carrier
Jan 29 11:03:09.177260 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:09.189949 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:03:09.192500 systemd-networkd[1376]: eth1: Link UP
Jan 29 11:03:09.192515 systemd-networkd[1376]: eth1: Gained carrier
Jan 29 11:03:09.192550 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:03:09.193255 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:03:09.193992 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:03:09.197415 systemd-resolved[1377]: Positive Trust Anchors:
Jan 29 11:03:09.197495 systemd-resolved[1377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:03:09.197527 systemd-resolved[1377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:03:09.201726 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:03:09.206187 systemd-resolved[1377]: Using system hostname 'ci-4152-2-0-3-44dff38e5d'.
Jan 29 11:03:09.208651 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:03:09.211082 systemd[1]: Reached target network.target - Network.
Jan 29 11:03:09.211921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:03:09.214374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:03:09.215232 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:03:09.215774 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:03:09.216310 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection.
Jan 29 11:03:09.216526 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:03:09.217241 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:03:09.218179 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:03:09.218904 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:03:09.219544 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:03:09.220233 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:03:09.220266 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:03:09.220759 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:03:09.222265 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:03:09.224512 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:03:09.231323 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:03:09.233215 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:03:09.234115 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:03:09.234737 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:03:09.235476 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:03:09.235511 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:03:09.236698 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:03:09.239902 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:03:09.245023 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:03:09.248802 systemd-networkd[1376]: eth0: DHCPv4 address 168.119.110.78/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 11:03:09.249172 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:03:09.251359 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection.
Jan 29 11:03:09.253969 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:03:09.254502 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:03:09.256936 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:03:09.261005 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:03:09.262852 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 29 11:03:09.267915 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:03:09.273909 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:03:09.277597 jq[1484]: false
Jan 29 11:03:09.278889 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:03:09.280374 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:03:09.282042 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:03:09.284888 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:03:09.288611 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:03:09.291551 dbus-daemon[1483]: [system] SELinux support is enabled
Jan 29 11:03:09.296277 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:03:09.297481 jq[1497]: true
Jan 29 11:03:09.301912 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:03:09.302115 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:03:09.318594 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:03:09.318660 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:03:09.320390 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:03:09.320412 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found loop4
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found loop5
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found loop6
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found loop7
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found sda
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found sda1
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found sda2
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found sda3
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found usr
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found sda4
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found sda6
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found sda7
Jan 29 11:03:09.322964 extend-filesystems[1485]: Found sda9
Jan 29 11:03:09.322964 extend-filesystems[1485]: Checking size of /dev/sda9
Jan 29 11:03:09.334676 jq[1502]: true
Jan 29 11:03:09.344067 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:03:09.345771 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:03:09.371579 tar[1500]: linux-arm64/LICENSE
Jan 29 11:03:09.371579 tar[1500]: linux-arm64/helm
Jan 29 11:03:09.378942 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:03:09.379127 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:03:09.380167 (ntainerd)[1512]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:03:09.383016 update_engine[1494]: I20250129 11:03:09.382797 1494 main.cc:92] Flatcar Update Engine starting
Jan 29 11:03:09.388226 extend-filesystems[1485]: Resized partition /dev/sda9
Jan 29 11:03:09.391092 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:03:09.400657 update_engine[1494]: I20250129 11:03:09.395465 1494 update_check_scheduler.cc:74] Next update check in 10m15s
Jan 29 11:03:09.400802 extend-filesystems[1529]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:03:09.476781 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 29 11:03:09.410968 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:03:09.476953 coreos-metadata[1482]: Jan 29 11:03:09.413 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Jan 29 11:03:09.476953 coreos-metadata[1482]: Jan 29 11:03:09.415 INFO Fetch successful
Jan 29 11:03:09.476953 coreos-metadata[1482]: Jan 29 11:03:09.415 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Jan 29 11:03:09.476953 coreos-metadata[1482]: Jan 29 11:03:09.418 INFO Fetch successful
Jan 29 11:03:09.448616 systemd-logind[1493]: New seat seat0.
Jan 29 11:03:09.479106 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 11:03:09.479126 systemd-logind[1493]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 29 11:03:09.481127 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:03:09.497016 bash[1543]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:03:09.498799 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:03:09.509955 systemd[1]: Starting sshkeys.service...
Jan 29 11:03:09.527741 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 29 11:03:09.548745 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 29 11:03:09.566901 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1297)
Jan 29 11:03:09.552576 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 29 11:03:09.570375 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 29 11:03:09.572766 extend-filesystems[1529]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 29 11:03:09.572766 extend-filesystems[1529]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 29 11:03:09.572766 extend-filesystems[1529]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 29 11:03:09.576311 extend-filesystems[1485]: Resized filesystem in /dev/sda9
Jan 29 11:03:09.576311 extend-filesystems[1485]: Found sr0
Jan 29 11:03:09.580067 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:03:09.580263 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:03:09.593259 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
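The resize the log reports (1617920 → 9393147 blocks of 4 KiB on /dev/sda9, done on-line while / is mounted) can be sanity-checked with a few lines of arithmetic. The block counts below are the ones from the resize2fs and kernel messages above; `blocks_to_gib` is a hypothetical helper for illustration:

```python
# Sanity-check the ext4 online resize reported by extend-filesystems:
# /dev/sda9 grew from 1617920 to 9393147 blocks of 4 KiB each.
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs message above

def blocks_to_gib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    """Convert an ext4 block count to GiB (hypothetical helper)."""
    return blocks * block_size / 2**30

old_gib = blocks_to_gib(1617920)  # size before the resize
new_gib = blocks_to_gib(9393147)  # size after the resize
print(f"{old_gib:.1f} GiB -> {new_gib:.1f} GiB")
```

That puts the root filesystem at roughly 6.2 GiB before and 35.8 GiB after, which is consistent with growing a small install image to fill the provisioned disk.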
Jan 29 11:03:09.638755 coreos-metadata[1557]: Jan 29 11:03:09.638 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 29 11:03:09.641103 coreos-metadata[1557]: Jan 29 11:03:09.641 INFO Fetch successful
Jan 29 11:03:09.642440 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:03:09.644406 unknown[1557]: wrote ssh authorized keys file for user: core
Jan 29 11:03:09.672177 update-ssh-keys[1569]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:03:09.673203 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 29 11:03:09.681862 systemd[1]: Finished sshkeys.service.
Jan 29 11:03:09.719161 containerd[1512]: time="2025-01-29T11:03:09.718898160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 11:03:09.762761 containerd[1512]: time="2025-01-29T11:03:09.761747560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:03:09.763437 containerd[1512]: time="2025-01-29T11:03:09.763395240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:03:09.763437 containerd[1512]: time="2025-01-29T11:03:09.763433160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:03:09.763500 containerd[1512]: time="2025-01-29T11:03:09.763451280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763601720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763625240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763683640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763694760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763860080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763875720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763887520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763896040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.763961360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.764209120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:03:09.764888 containerd[1512]: time="2025-01-29T11:03:09.764303200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:03:09.765116 containerd[1512]: time="2025-01-29T11:03:09.764316120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:03:09.765116 containerd[1512]: time="2025-01-29T11:03:09.764385640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:03:09.765116 containerd[1512]: time="2025-01-29T11:03:09.764427560Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:03:09.769813 containerd[1512]: time="2025-01-29T11:03:09.769782400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:03:09.769860 containerd[1512]: time="2025-01-29T11:03:09.769843360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:03:09.769885 containerd[1512]: time="2025-01-29T11:03:09.769859680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:03:09.769885 containerd[1512]: time="2025-01-29T11:03:09.769876760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:03:09.770550 containerd[1512]: time="2025-01-29T11:03:09.770528480Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:03:09.770729 containerd[1512]: time="2025-01-29T11:03:09.770690880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:03:09.771272 containerd[1512]: time="2025-01-29T11:03:09.771245760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:03:09.771387 containerd[1512]: time="2025-01-29T11:03:09.771368840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:03:09.771417 containerd[1512]: time="2025-01-29T11:03:09.771390000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:03:09.771417 containerd[1512]: time="2025-01-29T11:03:09.771405360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:03:09.771452 containerd[1512]: time="2025-01-29T11:03:09.771419320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:03:09.771452 containerd[1512]: time="2025-01-29T11:03:09.771433360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:03:09.771484 containerd[1512]: time="2025-01-29T11:03:09.771453360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:03:09.771484 containerd[1512]: time="2025-01-29T11:03:09.771470560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:03:09.771520 containerd[1512]: time="2025-01-29T11:03:09.771486280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:03:09.771520 containerd[1512]: time="2025-01-29T11:03:09.771498760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:03:09.771520 containerd[1512]: time="2025-01-29T11:03:09.771510640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:03:09.771567 containerd[1512]: time="2025-01-29T11:03:09.771521440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:03:09.771567 containerd[1512]: time="2025-01-29T11:03:09.771541880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771567 containerd[1512]: time="2025-01-29T11:03:09.771556320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771621 containerd[1512]: time="2025-01-29T11:03:09.771568240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771621 containerd[1512]: time="2025-01-29T11:03:09.771581280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771621 containerd[1512]: time="2025-01-29T11:03:09.771592720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771621 containerd[1512]: time="2025-01-29T11:03:09.771605680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771621 containerd[1512]: time="2025-01-29T11:03:09.771616560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771729 containerd[1512]: time="2025-01-29T11:03:09.771628960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771729 containerd[1512]: time="2025-01-29T11:03:09.771642560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771729 containerd[1512]: time="2025-01-29T11:03:09.771663120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771729 containerd[1512]: time="2025-01-29T11:03:09.771674680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.771729 containerd[1512]: time="2025-01-29T11:03:09.771686640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.772502 containerd[1512]: time="2025-01-29T11:03:09.772475000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.772533 containerd[1512]: time="2025-01-29T11:03:09.772511280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:03:09.772552 containerd[1512]: time="2025-01-29T11:03:09.772538200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.772570 containerd[1512]: time="2025-01-29T11:03:09.772560840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.772588 containerd[1512]: time="2025-01-29T11:03:09.772572200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:03:09.773201 containerd[1512]: time="2025-01-29T11:03:09.773178200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:03:09.773327 containerd[1512]: time="2025-01-29T11:03:09.773211680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:03:09.773360 containerd[1512]: time="2025-01-29T11:03:09.773330640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:03:09.773360 containerd[1512]: time="2025-01-29T11:03:09.773347800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:03:09.773360 containerd[1512]: time="2025-01-29T11:03:09.773357520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.773410 containerd[1512]: time="2025-01-29T11:03:09.773371040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:03:09.773410 containerd[1512]: time="2025-01-29T11:03:09.773381600Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:03:09.773410 containerd[1512]: time="2025-01-29T11:03:09.773391440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:03:09.774656 containerd[1512]: time="2025-01-29T11:03:09.774598800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:03:09.774776 containerd[1512]: time="2025-01-29T11:03:09.774690800Z" level=info msg="Connect containerd service"
Jan 29 11:03:09.774776 containerd[1512]: time="2025-01-29T11:03:09.774739320Z" level=info msg="using legacy CRI server"
Jan 29 11:03:09.774776 containerd[1512]: time="2025-01-29T11:03:09.774748800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:03:09.776708 containerd[1512]: time="2025-01-29T11:03:09.774980840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:03:09.776955 containerd[1512]: time="2025-01-29T11:03:09.776914040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:03:09.777486 containerd[1512]: time="2025-01-29T11:03:09.777448920Z" level=info msg="Start subscribing containerd event"
Jan 29 11:03:09.777520 containerd[1512]: time="2025-01-29T11:03:09.777503560Z" level=info msg="Start recovering state"
Jan 29 11:03:09.777581 containerd[1512]: time="2025-01-29T11:03:09.777566560Z" level=info msg="Start event monitor"
Jan 29 11:03:09.777606 containerd[1512]: time="2025-01-29T11:03:09.777581760Z" level=info msg="Start snapshots syncer"
Jan 29 11:03:09.777606 containerd[1512]: time="2025-01-29T11:03:09.777591280Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:03:09.777606 containerd[1512]: time="2025-01-29T11:03:09.777598880Z" level=info msg="Start streaming server"
Jan 29 11:03:09.779604 containerd[1512]: time="2025-01-29T11:03:09.779576720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:03:09.779642 containerd[1512]: time="2025-01-29T11:03:09.779627880Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:03:09.780063 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:03:09.781151 containerd[1512]: time="2025-01-29T11:03:09.781124360Z" level=info msg="containerd successfully booted in 0.063448s"
Jan 29 11:03:10.088298 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:03:10.095132 tar[1500]: linux-arm64/README.md
Jan 29 11:03:10.107966 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 29 11:03:10.109234 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:03:10.117413 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:03:10.123643 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:03:10.123934 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:03:10.132220 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:03:10.142770 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:03:10.151597 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:03:10.155746 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 11:03:10.156639 systemd[1]: Reached target getty.target - Login Prompts.
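Every entry in this dump follows one shape: a syslog-style timestamp, an identifier, a PID in brackets, and the message. A minimal sketch of a parser for that shape; the regex reflects what this particular console dump shows, not a guaranteed journald serialization:

```python
import re

# One dumped journal line: "<Mon DD HH:MM:SS.ffffff> <ident>[<pid>]: <message>"
LINE_RE = re.compile(r"^(\w{3} \d{1,2} [\d:.]+) (\S+)\[(\d+)\]: (.*)$")

def parse_line(line: str):
    """Split one dumped journal line into timestamp, identifier, PID, and message."""
    m = LINE_RE.match(line)
    if m is None:
        return None  # e.g. raw kernel ring-buffer continuation lines
    ts, ident, pid, msg = m.groups()
    return {"ts": ts, "ident": ident, "pid": int(pid), "msg": msg}

sample = "Jan 29 11:03:09.780063 systemd[1]: Started containerd.service - containerd container runtime."
rec = parse_line(sample)
print(rec["ident"], rec["pid"])  # systemd 1
```

Grouping parsed records by `ident`/`pid` is a quick way to pull, say, only the containerd plugin-loading lines or only the kubelet failures out of a dump like this one.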
Jan 29 11:03:10.504966 systemd-networkd[1376]: eth0: Gained IPv6LL
Jan 29 11:03:10.506515 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection.
Jan 29 11:03:10.508599 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:03:10.510451 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:03:10.516976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:03:10.520964 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:03:10.548594 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:03:10.632987 systemd-networkd[1376]: eth1: Gained IPv6LL
Jan 29 11:03:10.633480 systemd-timesyncd[1447]: Network configuration changed, trying to establish connection.
Jan 29 11:03:11.256721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:03:11.257866 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:03:11.262250 systemd[1]: Startup finished in 743ms (kernel) + 6.200s (initrd) + 4.405s (userspace) = 11.349s.
Jan 29 11:03:11.264183 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:03:11.750643 kubelet[1612]: E0129 11:03:11.750472 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:03:11.752398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:03:11.752694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:03:22.003813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:03:22.010031 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:03:22.138007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:03:22.141956 (kubelet)[1631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:03:22.180139 kubelet[1631]: E0129 11:03:22.180042 1631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:03:22.184876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:03:22.185114 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:03:32.435910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 11:03:32.446081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:03:32.544084 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:03:32.555247 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:03:32.602073 kubelet[1645]: E0129 11:03:32.602028 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:03:32.604134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:03:32.604261 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:03:41.170000 systemd-timesyncd[1447]: Contacted time server 116.203.244.102:123 (2.flatcar.pool.ntp.org).
Jan 29 11:03:41.170078 systemd-timesyncd[1447]: Initial clock synchronization to Wed 2025-01-29 11:03:41.047242 UTC.
Jan 29 11:03:42.855408 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 29 11:03:42.861956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:03:42.969593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:03:42.982244 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:03:43.026860 kubelet[1661]: E0129 11:03:43.026801 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:03:43.028808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:03:43.028941 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:03:53.146689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 29 11:03:53.154040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:03:53.251989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:03:53.252082 (kubelet)[1676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:03:53.291324 kubelet[1676]: E0129 11:03:53.291239 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:03:53.293334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:03:53.293639 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:03:54.958246 update_engine[1494]: I20250129 11:03:54.958107 1494 update_attempter.cc:509] Updating boot flags...
Jan 29 11:03:55.004740 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1692)
Jan 29 11:03:55.062183 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1692)
Jan 29 11:03:55.112784 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1692)
Jan 29 11:04:03.396772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 29 11:04:03.408090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:04:03.517495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:04:03.525140 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:03.568676 kubelet[1712]: E0129 11:04:03.568494 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:03.571384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:03.571638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:13.646812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 11:04:13.655042 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:13.764174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:13.768547 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:13.809364 kubelet[1727]: E0129 11:04:13.809267 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:13.812214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:13.812361 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:23.896626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 11:04:23.904007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:04:24.011901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:24.016328 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:24.056663 kubelet[1742]: E0129 11:04:24.056615 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:24.059160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:24.059332 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:34.146578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 11:04:34.156978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:34.264985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:34.266816 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:34.309726 kubelet[1757]: E0129 11:04:34.308663 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:34.313603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:34.314160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:44.396839 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Jan 29 11:04:44.408119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:44.505590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:44.510095 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:44.556922 kubelet[1771]: E0129 11:04:44.556869 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:44.560222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:44.560547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:04:54.646675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 11:04:54.653014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:04:54.792938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:04:54.793811 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:04:54.835055 kubelet[1787]: E0129 11:04:54.834994 1787 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:04:54.837429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:04:54.837842 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:05:04.896435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 11:05:04.903059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:05.008693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:05.014620 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:05:05.051464 kubelet[1801]: E0129 11:05:05.051393 1801 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:05:05.053980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:05:05.054263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:05:07.383985 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:05:07.391373 systemd[1]: Started sshd@0-168.119.110.78:22-147.75.109.163:55260.service - OpenSSH per-connection server daemon (147.75.109.163:55260). Jan 29 11:05:08.388895 sshd[1810]: Accepted publickey for core from 147.75.109.163 port 55260 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:08.391648 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:08.404294 systemd-logind[1493]: New session 1 of user core. Jan 29 11:05:08.406539 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:05:08.417153 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:05:08.430182 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 29 11:05:08.437014 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:05:08.440436 (systemd)[1814]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:05:08.540342 systemd[1814]: Queued start job for default target default.target. Jan 29 11:05:08.550541 systemd[1814]: Created slice app.slice - User Application Slice. Jan 29 11:05:08.550594 systemd[1814]: Reached target paths.target - Paths. Jan 29 11:05:08.550619 systemd[1814]: Reached target timers.target - Timers. Jan 29 11:05:08.552689 systemd[1814]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:05:08.566833 systemd[1814]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:05:08.566948 systemd[1814]: Reached target sockets.target - Sockets. Jan 29 11:05:08.566963 systemd[1814]: Reached target basic.target - Basic System. Jan 29 11:05:08.567006 systemd[1814]: Reached target default.target - Main User Target. Jan 29 11:05:08.567033 systemd[1814]: Startup finished in 120ms. Jan 29 11:05:08.567169 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:05:08.578008 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:05:09.294169 systemd[1]: Started sshd@1-168.119.110.78:22-147.75.109.163:43438.service - OpenSSH per-connection server daemon (147.75.109.163:43438). Jan 29 11:05:10.292100 sshd[1825]: Accepted publickey for core from 147.75.109.163 port 43438 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:10.294142 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:10.299676 systemd-logind[1493]: New session 2 of user core. Jan 29 11:05:10.304933 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 11:05:10.980344 sshd[1827]: Connection closed by 147.75.109.163 port 43438 Jan 29 11:05:10.981353 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:10.985505 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:05:10.986042 systemd[1]: sshd@1-168.119.110.78:22-147.75.109.163:43438.service: Deactivated successfully. Jan 29 11:05:10.988269 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:05:10.989843 systemd-logind[1493]: Removed session 2. Jan 29 11:05:11.155026 systemd[1]: Started sshd@2-168.119.110.78:22-147.75.109.163:43452.service - OpenSSH per-connection server daemon (147.75.109.163:43452). Jan 29 11:05:12.154007 sshd[1832]: Accepted publickey for core from 147.75.109.163 port 43452 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:12.156397 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:12.161619 systemd-logind[1493]: New session 3 of user core. Jan 29 11:05:12.173994 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:05:12.838185 sshd[1834]: Connection closed by 147.75.109.163 port 43452 Jan 29 11:05:12.839255 sshd-session[1832]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:12.844963 systemd[1]: sshd@2-168.119.110.78:22-147.75.109.163:43452.service: Deactivated successfully. Jan 29 11:05:12.847548 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:05:12.848525 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:05:12.850093 systemd-logind[1493]: Removed session 3. Jan 29 11:05:13.019224 systemd[1]: Started sshd@3-168.119.110.78:22-147.75.109.163:43468.service - OpenSSH per-connection server daemon (147.75.109.163:43468). 
Jan 29 11:05:14.009115 sshd[1839]: Accepted publickey for core from 147.75.109.163 port 43468 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:14.011335 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:14.016972 systemd-logind[1493]: New session 4 of user core. Jan 29 11:05:14.027054 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:05:14.691448 sshd[1841]: Connection closed by 147.75.109.163 port 43468 Jan 29 11:05:14.692610 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:14.698688 systemd[1]: sshd@3-168.119.110.78:22-147.75.109.163:43468.service: Deactivated successfully. Jan 29 11:05:14.702045 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:05:14.703108 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:05:14.705015 systemd-logind[1493]: Removed session 4. Jan 29 11:05:14.862145 systemd[1]: Started sshd@4-168.119.110.78:22-147.75.109.163:43480.service - OpenSSH per-connection server daemon (147.75.109.163:43480). Jan 29 11:05:15.146689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 11:05:15.154037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:15.263932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:05:15.269107 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:05:15.319244 kubelet[1856]: E0129 11:05:15.319141 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:05:15.323252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:05:15.323406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:05:15.840043 sshd[1846]: Accepted publickey for core from 147.75.109.163 port 43480 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:15.842363 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:15.848319 systemd-logind[1493]: New session 5 of user core. Jan 29 11:05:15.862950 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:05:16.369720 sudo[1863]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:05:16.370003 sudo[1863]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:05:16.395014 sudo[1863]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:16.553766 sshd[1862]: Connection closed by 147.75.109.163 port 43480 Jan 29 11:05:16.555026 sshd-session[1846]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:16.558107 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:05:16.559418 systemd[1]: sshd@4-168.119.110.78:22-147.75.109.163:43480.service: Deactivated successfully. Jan 29 11:05:16.563685 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit. 
Jan 29 11:05:16.564744 systemd-logind[1493]: Removed session 5. Jan 29 11:05:16.741923 systemd[1]: Started sshd@5-168.119.110.78:22-147.75.109.163:43488.service - OpenSSH per-connection server daemon (147.75.109.163:43488). Jan 29 11:05:17.720791 sshd[1868]: Accepted publickey for core from 147.75.109.163 port 43488 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:17.722727 sshd-session[1868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:17.727482 systemd-logind[1493]: New session 6 of user core. Jan 29 11:05:17.735069 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:05:18.240634 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:05:18.240956 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:05:18.244803 sudo[1872]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:18.251111 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:05:18.251373 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:05:18.271431 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:05:18.304888 augenrules[1894]: No rules Jan 29 11:05:18.306373 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:05:18.306606 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:05:18.309023 sudo[1871]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:18.466828 sshd[1870]: Connection closed by 147.75.109.163 port 43488 Jan 29 11:05:18.467671 sshd-session[1868]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:18.471852 systemd[1]: sshd@5-168.119.110.78:22-147.75.109.163:43488.service: Deactivated successfully. 
Jan 29 11:05:18.475194 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:05:18.477663 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:05:18.478916 systemd-logind[1493]: Removed session 6. Jan 29 11:05:18.634680 systemd[1]: Started sshd@6-168.119.110.78:22-147.75.109.163:41882.service - OpenSSH per-connection server daemon (147.75.109.163:41882). Jan 29 11:05:19.627524 sshd[1902]: Accepted publickey for core from 147.75.109.163 port 41882 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:05:19.629800 sshd-session[1902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:05:19.635346 systemd-logind[1493]: New session 7 of user core. Jan 29 11:05:19.644121 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:05:20.148042 sudo[1905]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:05:20.148309 sudo[1905]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:05:20.443057 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:05:20.445783 (dockerd)[1923]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:05:20.668177 dockerd[1923]: time="2025-01-29T11:05:20.667781462Z" level=info msg="Starting up" Jan 29 11:05:20.768063 dockerd[1923]: time="2025-01-29T11:05:20.767158686Z" level=info msg="Loading containers: start." Jan 29 11:05:20.940846 kernel: Initializing XFRM netlink socket Jan 29 11:05:21.020180 systemd-networkd[1376]: docker0: Link UP Jan 29 11:05:21.048339 dockerd[1923]: time="2025-01-29T11:05:21.048175802Z" level=info msg="Loading containers: done." 
Jan 29 11:05:21.066434 dockerd[1923]: time="2025-01-29T11:05:21.066327241Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:05:21.066690 dockerd[1923]: time="2025-01-29T11:05:21.066494161Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:05:21.066690 dockerd[1923]: time="2025-01-29T11:05:21.066664240Z" level=info msg="Daemon has completed initialization" Jan 29 11:05:21.106099 dockerd[1923]: time="2025-01-29T11:05:21.105821307Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:05:21.106230 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:05:21.836662 containerd[1512]: time="2025-01-29T11:05:21.836604593Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 11:05:22.391074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592285922.mount: Deactivated successfully. 
Jan 29 11:05:23.425211 containerd[1512]: time="2025-01-29T11:05:23.425147791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:23.427667 containerd[1512]: time="2025-01-29T11:05:23.427624581Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26221040" Jan 29 11:05:23.428895 containerd[1512]: time="2025-01-29T11:05:23.428863496Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:23.432878 containerd[1512]: time="2025-01-29T11:05:23.432829400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:23.434219 containerd[1512]: time="2025-01-29T11:05:23.434189714Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 1.597543041s" Jan 29 11:05:23.434315 containerd[1512]: time="2025-01-29T11:05:23.434300474Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 11:05:23.435178 containerd[1512]: time="2025-01-29T11:05:23.435138311Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 11:05:24.554147 containerd[1512]: time="2025-01-29T11:05:24.554084137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:24.555740 containerd[1512]: time="2025-01-29T11:05:24.555672451Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527127" Jan 29 11:05:24.556485 containerd[1512]: time="2025-01-29T11:05:24.556389968Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:24.561641 containerd[1512]: time="2025-01-29T11:05:24.561579508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:24.563129 containerd[1512]: time="2025-01-29T11:05:24.563011942Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.127643952s" Jan 29 11:05:24.563129 containerd[1512]: time="2025-01-29T11:05:24.563046182Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 11:05:24.563742 containerd[1512]: time="2025-01-29T11:05:24.563561500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 11:05:25.396271 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 29 11:05:25.403061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:25.509905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:05:25.514517 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:05:25.568680 kubelet[2177]: E0129 11:05:25.568590 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:05:25.572992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:05:25.573160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:05:25.886438 containerd[1512]: time="2025-01-29T11:05:25.886224386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:25.888013 containerd[1512]: time="2025-01-29T11:05:25.887948420Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481133" Jan 29 11:05:25.890127 containerd[1512]: time="2025-01-29T11:05:25.890040412Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:25.894966 containerd[1512]: time="2025-01-29T11:05:25.894892553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:25.896532 containerd[1512]: time="2025-01-29T11:05:25.896182909Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.332590449s" Jan 29 11:05:25.896532 containerd[1512]: time="2025-01-29T11:05:25.896219068Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 11:05:25.896987 containerd[1512]: time="2025-01-29T11:05:25.896953706Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 11:05:26.876889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024518857.mount: Deactivated successfully. Jan 29 11:05:27.220938 containerd[1512]: time="2025-01-29T11:05:27.220881583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:27.223215 containerd[1512]: time="2025-01-29T11:05:27.222229218Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364423" Jan 29 11:05:27.225729 containerd[1512]: time="2025-01-29T11:05:27.224780449Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:27.229299 containerd[1512]: time="2025-01-29T11:05:27.229262433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:27.231140 containerd[1512]: time="2025-01-29T11:05:27.231112067Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.333985202s" Jan 29 11:05:27.231248 containerd[1512]: time="2025-01-29T11:05:27.231231906Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 11:05:27.231806 containerd[1512]: time="2025-01-29T11:05:27.231770304Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 11:05:27.827225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804834392.mount: Deactivated successfully. Jan 29 11:05:28.608734 containerd[1512]: time="2025-01-29T11:05:28.608615061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:28.610193 containerd[1512]: time="2025-01-29T11:05:28.610135056Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Jan 29 11:05:28.611245 containerd[1512]: time="2025-01-29T11:05:28.611176733Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:28.615145 containerd[1512]: time="2025-01-29T11:05:28.615039800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:28.618434 containerd[1512]: time="2025-01-29T11:05:28.618037750Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.386043206s" Jan 29 11:05:28.618434 containerd[1512]: time="2025-01-29T11:05:28.618103349Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 11:05:28.619214 containerd[1512]: time="2025-01-29T11:05:28.619176226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:05:29.048512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807829413.mount: Deactivated successfully. Jan 29 11:05:29.056430 containerd[1512]: time="2025-01-29T11:05:29.056357166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:29.058856 containerd[1512]: time="2025-01-29T11:05:29.058067521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 29 11:05:29.058856 containerd[1512]: time="2025-01-29T11:05:29.058766279Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:29.062369 containerd[1512]: time="2025-01-29T11:05:29.062314427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:29.063609 containerd[1512]: time="2025-01-29T11:05:29.063562183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 444.335437ms" Jan 29 
11:05:29.063609 containerd[1512]: time="2025-01-29T11:05:29.063603743Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 11:05:29.064182 containerd[1512]: time="2025-01-29T11:05:29.064136341Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 11:05:29.179154 systemd[1]: Started sshd@7-168.119.110.78:22-92.255.85.188:54158.service - OpenSSH per-connection server daemon (92.255.85.188:54158). Jan 29 11:05:29.592048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383238770.mount: Deactivated successfully. Jan 29 11:05:30.206607 sshd[2253]: Invalid user ubnt from 92.255.85.188 port 54158 Jan 29 11:05:30.263028 sshd[2253]: Connection closed by invalid user ubnt 92.255.85.188 port 54158 [preauth] Jan 29 11:05:30.265141 systemd[1]: sshd@7-168.119.110.78:22-92.255.85.188:54158.service: Deactivated successfully. Jan 29 11:05:30.949406 containerd[1512]: time="2025-01-29T11:05:30.949347718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:30.950628 containerd[1512]: time="2025-01-29T11:05:30.950532274Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812491" Jan 29 11:05:30.952720 containerd[1512]: time="2025-01-29T11:05:30.951684631Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:30.955586 containerd[1512]: time="2025-01-29T11:05:30.955533779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:05:30.957875 containerd[1512]: time="2025-01-29T11:05:30.957839812Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.893565871s" Jan 29 11:05:30.957950 containerd[1512]: time="2025-01-29T11:05:30.957877932Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 11:05:35.619224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 29 11:05:35.626326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:35.641662 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:05:35.641761 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:05:35.642225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:35.651318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:35.686719 systemd[1]: Reloading requested from client PID 2343 ('systemctl') (unit session-7.scope)... Jan 29 11:05:35.686734 systemd[1]: Reloading... Jan 29 11:05:35.805737 zram_generator::config[2379]: No configuration found. Jan 29 11:05:35.924854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:05:36.003307 systemd[1]: Reloading finished in 316 ms. Jan 29 11:05:36.077048 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:05:36.077214 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:05:36.077634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:05:36.089628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:36.221949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:36.222804 (kubelet)[2430]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:05:36.264334 kubelet[2430]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:05:36.264334 kubelet[2430]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:05:36.264334 kubelet[2430]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:05:36.264743 kubelet[2430]: I0129 11:05:36.264440 2430 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:05:37.489638 kubelet[2430]: I0129 11:05:37.489366 2430 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:05:37.489638 kubelet[2430]: I0129 11:05:37.489404 2430 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:05:37.490467 kubelet[2430]: I0129 11:05:37.489766 2430 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:05:37.515619 kubelet[2430]: E0129 11:05:37.515571 2430 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://168.119.110.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:05:37.517010 kubelet[2430]: I0129 11:05:37.516830 2430 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:05:37.531492 kubelet[2430]: E0129 11:05:37.531439 2430 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:05:37.531492 kubelet[2430]: I0129 11:05:37.531477 2430 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:05:37.534181 kubelet[2430]: I0129 11:05:37.534157 2430 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:05:37.535043 kubelet[2430]: I0129 11:05:37.535001 2430 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:05:37.535219 kubelet[2430]: I0129 11:05:37.535048 2430 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-3-44dff38e5d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:05:37.535343 kubelet[2430]: I0129 11:05:37.535329 2430 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 29 11:05:37.535343 kubelet[2430]: I0129 11:05:37.535344 2430 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:05:37.535566 kubelet[2430]: I0129 11:05:37.535538 2430 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:05:37.538719 kubelet[2430]: I0129 11:05:37.538672 2430 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:05:37.538719 kubelet[2430]: I0129 11:05:37.538722 2430 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:05:37.538800 kubelet[2430]: I0129 11:05:37.538742 2430 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:05:37.538800 kubelet[2430]: I0129 11:05:37.538752 2430 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:05:37.542014 kubelet[2430]: W0129 11:05:37.540857 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://168.119.110.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-3-44dff38e5d&limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 11:05:37.542014 kubelet[2430]: E0129 11:05:37.540967 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://168.119.110.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-3-44dff38e5d&limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:05:37.542014 kubelet[2430]: W0129 11:05:37.541882 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://168.119.110.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 11:05:37.542014 kubelet[2430]: E0129 11:05:37.541906 2430 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://168.119.110.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:05:37.543735 kubelet[2430]: I0129 11:05:37.542525 2430 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:05:37.543735 kubelet[2430]: I0129 11:05:37.543118 2430 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:05:37.543735 kubelet[2430]: W0129 11:05:37.543252 2430 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:05:37.545362 kubelet[2430]: I0129 11:05:37.545337 2430 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:05:37.545466 kubelet[2430]: I0129 11:05:37.545457 2430 server.go:1287] "Started kubelet" Jan 29 11:05:37.551664 kubelet[2430]: E0129 11:05:37.551362 2430 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.110.78:6443/api/v1/namespaces/default/events\": dial tcp 168.119.110.78:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-3-44dff38e5d.181f25115430848b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-3-44dff38e5d,UID:ci-4152-2-0-3-44dff38e5d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-3-44dff38e5d,},FirstTimestamp:2025-01-29 11:05:37.545438347 +0000 UTC m=+1.316395876,LastTimestamp:2025-01-29 11:05:37.545438347 +0000 UTC m=+1.316395876,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-3-44dff38e5d,}" Jan 29 11:05:37.552356 kubelet[2430]: I0129 11:05:37.552337 2430 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:05:37.554551 kubelet[2430]: I0129 11:05:37.554500 2430 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:05:37.555474 kubelet[2430]: I0129 11:05:37.555438 2430 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:05:37.556482 kubelet[2430]: I0129 11:05:37.556424 2430 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:05:37.556723 kubelet[2430]: I0129 11:05:37.556685 2430 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:05:37.556948 kubelet[2430]: I0129 11:05:37.556922 2430 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:05:37.557447 kubelet[2430]: E0129 11:05:37.557411 2430 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" Jan 29 11:05:37.557502 kubelet[2430]: I0129 11:05:37.557456 2430 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:05:37.557698 kubelet[2430]: I0129 11:05:37.557672 2430 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:05:37.557698 kubelet[2430]: I0129 11:05:37.557766 2430 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:05:37.558313 kubelet[2430]: W0129 11:05:37.558244 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.110.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 11:05:37.558367 kubelet[2430]: E0129 11:05:37.558314 2430 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://168.119.110.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:05:37.559090 kubelet[2430]: E0129 11:05:37.558675 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-3-44dff38e5d?timeout=10s\": dial tcp 168.119.110.78:6443: connect: connection refused" interval="200ms" Jan 29 11:05:37.559505 kubelet[2430]: I0129 11:05:37.559484 2430 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:05:37.559659 kubelet[2430]: I0129 11:05:37.559642 2430 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:05:37.560200 kubelet[2430]: E0129 11:05:37.560176 2430 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:05:37.561281 kubelet[2430]: I0129 11:05:37.561223 2430 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:05:37.573368 kubelet[2430]: I0129 11:05:37.573300 2430 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:05:37.574474 kubelet[2430]: I0129 11:05:37.574429 2430 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:05:37.574474 kubelet[2430]: I0129 11:05:37.574463 2430 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:05:37.574575 kubelet[2430]: I0129 11:05:37.574488 2430 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 11:05:37.574575 kubelet[2430]: I0129 11:05:37.574496 2430 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:05:37.574575 kubelet[2430]: E0129 11:05:37.574560 2430 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:05:37.584743 kubelet[2430]: W0129 11:05:37.584579 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://168.119.110.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 11:05:37.584743 kubelet[2430]: E0129 11:05:37.584642 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://168.119.110.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:05:37.585686 kubelet[2430]: I0129 11:05:37.585637 2430 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:05:37.585686 kubelet[2430]: I0129 11:05:37.585652 2430 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:05:37.585826 kubelet[2430]: I0129 11:05:37.585690 2430 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:05:37.587752 kubelet[2430]: I0129 11:05:37.587721 2430 policy_none.go:49] "None policy: Start" Jan 29 11:05:37.587752 kubelet[2430]: I0129 11:05:37.587745 2430 memory_manager.go:186] "Starting memorymanager" policy="None" 
Jan 29 11:05:37.587752 kubelet[2430]: I0129 11:05:37.587756 2430 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:05:37.595366 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:05:37.612911 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:05:37.617006 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:05:37.627358 kubelet[2430]: I0129 11:05:37.627314 2430 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:05:37.627910 kubelet[2430]: I0129 11:05:37.627880 2430 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:05:37.628086 kubelet[2430]: I0129 11:05:37.628029 2430 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:05:37.629925 kubelet[2430]: I0129 11:05:37.629735 2430 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:05:37.631736 kubelet[2430]: E0129 11:05:37.631687 2430 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 11:05:37.632031 kubelet[2430]: E0129 11:05:37.631941 2430 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-3-44dff38e5d\" not found" Jan 29 11:05:37.690051 systemd[1]: Created slice kubepods-burstable-pod64ce047490cd19ff34bcd41466b5a7a4.slice - libcontainer container kubepods-burstable-pod64ce047490cd19ff34bcd41466b5a7a4.slice. 
Jan 29 11:05:37.699340 kubelet[2430]: E0129 11:05:37.699259 2430 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.704296 systemd[1]: Created slice kubepods-burstable-pod7baf7b995cfa6fa459e39464863f65c4.slice - libcontainer container kubepods-burstable-pod7baf7b995cfa6fa459e39464863f65c4.slice. Jan 29 11:05:37.706309 kubelet[2430]: E0129 11:05:37.706261 2430 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.709082 systemd[1]: Created slice kubepods-burstable-pode2590c740afd8b27761d8720c559dce5.slice - libcontainer container kubepods-burstable-pode2590c740afd8b27761d8720c559dce5.slice. Jan 29 11:05:37.710799 kubelet[2430]: E0129 11:05:37.710768 2430 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.731544 kubelet[2430]: I0129 11:05:37.731491 2430 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.732224 kubelet[2430]: E0129 11:05:37.732187 2430 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.110.78:6443/api/v1/nodes\": dial tcp 168.119.110.78:6443: connect: connection refused" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.759951 kubelet[2430]: I0129 11:05:37.758862 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 
29 11:05:37.759951 kubelet[2430]: I0129 11:05:37.758918 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.759951 kubelet[2430]: I0129 11:05:37.758951 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.759951 kubelet[2430]: I0129 11:05:37.758982 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64ce047490cd19ff34bcd41466b5a7a4-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-3-44dff38e5d\" (UID: \"64ce047490cd19ff34bcd41466b5a7a4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.759951 kubelet[2430]: I0129 11:05:37.759027 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64ce047490cd19ff34bcd41466b5a7a4-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-3-44dff38e5d\" (UID: \"64ce047490cd19ff34bcd41466b5a7a4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.760303 kubelet[2430]: I0129 11:05:37.759055 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64ce047490cd19ff34bcd41466b5a7a4-usr-share-ca-certificates\") 
pod \"kube-apiserver-ci-4152-2-0-3-44dff38e5d\" (UID: \"64ce047490cd19ff34bcd41466b5a7a4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.760303 kubelet[2430]: I0129 11:05:37.759093 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.760303 kubelet[2430]: I0129 11:05:37.759121 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.760303 kubelet[2430]: I0129 11:05:37.759185 2430 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2590c740afd8b27761d8720c559dce5-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-3-44dff38e5d\" (UID: \"e2590c740afd8b27761d8720c559dce5\") " pod="kube-system/kube-scheduler-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.760303 kubelet[2430]: E0129 11:05:37.759372 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-3-44dff38e5d?timeout=10s\": dial tcp 168.119.110.78:6443: connect: connection refused" interval="400ms" Jan 29 11:05:37.935459 kubelet[2430]: I0129 11:05:37.935405 2430 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:37.935947 kubelet[2430]: E0129 11:05:37.935905 2430 
kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.110.78:6443/api/v1/nodes\": dial tcp 168.119.110.78:6443: connect: connection refused" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:38.001959 containerd[1512]: time="2025-01-29T11:05:38.001826665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-3-44dff38e5d,Uid:64ce047490cd19ff34bcd41466b5a7a4,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:38.007753 containerd[1512]: time="2025-01-29T11:05:38.007385613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-3-44dff38e5d,Uid:7baf7b995cfa6fa459e39464863f65c4,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:38.012473 containerd[1512]: time="2025-01-29T11:05:38.012383122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-3-44dff38e5d,Uid:e2590c740afd8b27761d8720c559dce5,Namespace:kube-system,Attempt:0,}" Jan 29 11:05:38.160411 kubelet[2430]: E0129 11:05:38.160370 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-3-44dff38e5d?timeout=10s\": dial tcp 168.119.110.78:6443: connect: connection refused" interval="800ms" Jan 29 11:05:38.338028 kubelet[2430]: I0129 11:05:38.337874 2430 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:38.338689 kubelet[2430]: E0129 11:05:38.338288 2430 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://168.119.110.78:6443/api/v1/nodes\": dial tcp 168.119.110.78:6443: connect: connection refused" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:38.350036 kubelet[2430]: W0129 11:05:38.349867 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://168.119.110.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-3-44dff38e5d&limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 11:05:38.350488 kubelet[2430]: E0129 11:05:38.350131 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://168.119.110.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-3-44dff38e5d&limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:05:38.488443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113111545.mount: Deactivated successfully. Jan 29 11:05:38.494051 containerd[1512]: time="2025-01-29T11:05:38.493159674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:38.495312 containerd[1512]: time="2025-01-29T11:05:38.495266509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 11:05:38.497448 containerd[1512]: time="2025-01-29T11:05:38.497413224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:38.499203 containerd[1512]: time="2025-01-29T11:05:38.499170140Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:38.501266 containerd[1512]: time="2025-01-29T11:05:38.501192736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:05:38.503119 containerd[1512]: 
time="2025-01-29T11:05:38.503089532Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:38.504651 containerd[1512]: time="2025-01-29T11:05:38.504611409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:05:38.505667 containerd[1512]: time="2025-01-29T11:05:38.505635406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:05:38.506646 containerd[1512]: time="2025-01-29T11:05:38.506612524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.674419ms" Jan 29 11:05:38.512001 containerd[1512]: time="2025-01-29T11:05:38.511646273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.18782ms" Jan 29 11:05:38.512303 containerd[1512]: time="2025-01-29T11:05:38.512133712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 499.511751ms" Jan 29 11:05:38.624586 
containerd[1512]: time="2025-01-29T11:05:38.624412947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:38.625922 containerd[1512]: time="2025-01-29T11:05:38.625835984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:38.626165 containerd[1512]: time="2025-01-29T11:05:38.625907624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:38.626270 containerd[1512]: time="2025-01-29T11:05:38.626148263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:38.632398 containerd[1512]: time="2025-01-29T11:05:38.632285170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:38.632617 containerd[1512]: time="2025-01-29T11:05:38.632579329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:38.632735 containerd[1512]: time="2025-01-29T11:05:38.632676889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:38.632971 containerd[1512]: time="2025-01-29T11:05:38.632910249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:38.634214 containerd[1512]: time="2025-01-29T11:05:38.633947366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:05:38.634214 containerd[1512]: time="2025-01-29T11:05:38.633998046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:05:38.634214 containerd[1512]: time="2025-01-29T11:05:38.634013166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:38.634214 containerd[1512]: time="2025-01-29T11:05:38.634079926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:05:38.656274 systemd[1]: Started cri-containerd-56095cdaa72c8e58e02a49407306019fa9380271ac071c80883bd56395375cff.scope - libcontainer container 56095cdaa72c8e58e02a49407306019fa9380271ac071c80883bd56395375cff. Jan 29 11:05:38.661686 systemd[1]: Started cri-containerd-79ae6459cd3dea8640b510a880f5594834885cea4e2c32fd36d7f2454d0feacc.scope - libcontainer container 79ae6459cd3dea8640b510a880f5594834885cea4e2c32fd36d7f2454d0feacc. Jan 29 11:05:38.671929 systemd[1]: Started cri-containerd-e778add3f9de1dee86eeca07817eb6aa4d225cecb73b37cea9c513fc2cf8d621.scope - libcontainer container e778add3f9de1dee86eeca07817eb6aa4d225cecb73b37cea9c513fc2cf8d621. 
Jan 29 11:05:38.719481 containerd[1512]: time="2025-01-29T11:05:38.719337220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-3-44dff38e5d,Uid:64ce047490cd19ff34bcd41466b5a7a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e778add3f9de1dee86eeca07817eb6aa4d225cecb73b37cea9c513fc2cf8d621\"" Jan 29 11:05:38.727687 containerd[1512]: time="2025-01-29T11:05:38.726465405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-3-44dff38e5d,Uid:7baf7b995cfa6fa459e39464863f65c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"79ae6459cd3dea8640b510a880f5594834885cea4e2c32fd36d7f2454d0feacc\"" Jan 29 11:05:38.729348 containerd[1512]: time="2025-01-29T11:05:38.729276839Z" level=info msg="CreateContainer within sandbox \"e778add3f9de1dee86eeca07817eb6aa4d225cecb73b37cea9c513fc2cf8d621\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:05:38.730426 containerd[1512]: time="2025-01-29T11:05:38.730382636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-3-44dff38e5d,Uid:e2590c740afd8b27761d8720c559dce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"56095cdaa72c8e58e02a49407306019fa9380271ac071c80883bd56395375cff\"" Jan 29 11:05:38.733071 containerd[1512]: time="2025-01-29T11:05:38.732687151Z" level=info msg="CreateContainer within sandbox \"56095cdaa72c8e58e02a49407306019fa9380271ac071c80883bd56395375cff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:05:38.733266 containerd[1512]: time="2025-01-29T11:05:38.733238070Z" level=info msg="CreateContainer within sandbox \"79ae6459cd3dea8640b510a880f5594834885cea4e2c32fd36d7f2454d0feacc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:05:38.748438 containerd[1512]: time="2025-01-29T11:05:38.748083877Z" level=info msg="CreateContainer within sandbox 
\"79ae6459cd3dea8640b510a880f5594834885cea4e2c32fd36d7f2454d0feacc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e62b35abd3a64c2cbbaebfed1017bf56cdb8ff7b71b40c7ecd64151d19722331\"" Jan 29 11:05:38.749610 containerd[1512]: time="2025-01-29T11:05:38.749545754Z" level=info msg="StartContainer for \"e62b35abd3a64c2cbbaebfed1017bf56cdb8ff7b71b40c7ecd64151d19722331\"" Jan 29 11:05:38.755294 containerd[1512]: time="2025-01-29T11:05:38.755241622Z" level=info msg="CreateContainer within sandbox \"56095cdaa72c8e58e02a49407306019fa9380271ac071c80883bd56395375cff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b18d58d70b3f295a7935e9ea1f2ca504f215ecf6e7fd9ec5f2cd2fbbe3c555b5\"" Jan 29 11:05:38.756820 containerd[1512]: time="2025-01-29T11:05:38.756348139Z" level=info msg="StartContainer for \"b18d58d70b3f295a7935e9ea1f2ca504f215ecf6e7fd9ec5f2cd2fbbe3c555b5\"" Jan 29 11:05:38.759040 containerd[1512]: time="2025-01-29T11:05:38.759008014Z" level=info msg="CreateContainer within sandbox \"e778add3f9de1dee86eeca07817eb6aa4d225cecb73b37cea9c513fc2cf8d621\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c677200dd5bedb9c95732402e803e305c671ddc9047cadbf13c5ff06a0445fb0\"" Jan 29 11:05:38.760742 containerd[1512]: time="2025-01-29T11:05:38.760688890Z" level=info msg="StartContainer for \"c677200dd5bedb9c95732402e803e305c671ddc9047cadbf13c5ff06a0445fb0\"" Jan 29 11:05:38.787952 systemd[1]: Started cri-containerd-e62b35abd3a64c2cbbaebfed1017bf56cdb8ff7b71b40c7ecd64151d19722331.scope - libcontainer container e62b35abd3a64c2cbbaebfed1017bf56cdb8ff7b71b40c7ecd64151d19722331. Jan 29 11:05:38.805099 systemd[1]: Started cri-containerd-b18d58d70b3f295a7935e9ea1f2ca504f215ecf6e7fd9ec5f2cd2fbbe3c555b5.scope - libcontainer container b18d58d70b3f295a7935e9ea1f2ca504f215ecf6e7fd9ec5f2cd2fbbe3c555b5. 
Jan 29 11:05:38.807270 systemd[1]: Started cri-containerd-c677200dd5bedb9c95732402e803e305c671ddc9047cadbf13c5ff06a0445fb0.scope - libcontainer container c677200dd5bedb9c95732402e803e305c671ddc9047cadbf13c5ff06a0445fb0. Jan 29 11:05:38.860119 containerd[1512]: time="2025-01-29T11:05:38.859965913Z" level=info msg="StartContainer for \"e62b35abd3a64c2cbbaebfed1017bf56cdb8ff7b71b40c7ecd64151d19722331\" returns successfully" Jan 29 11:05:38.881172 containerd[1512]: time="2025-01-29T11:05:38.880635228Z" level=info msg="StartContainer for \"c677200dd5bedb9c95732402e803e305c671ddc9047cadbf13c5ff06a0445fb0\" returns successfully" Jan 29 11:05:38.881172 containerd[1512]: time="2025-01-29T11:05:38.880744468Z" level=info msg="StartContainer for \"b18d58d70b3f295a7935e9ea1f2ca504f215ecf6e7fd9ec5f2cd2fbbe3c555b5\" returns successfully" Jan 29 11:05:38.917916 kubelet[2430]: W0129 11:05:38.917387 2430 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://168.119.110.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.110.78:6443: connect: connection refused Jan 29 11:05:38.917916 kubelet[2430]: E0129 11:05:38.917460 2430 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://168.119.110.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 168.119.110.78:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:05:38.961728 kubelet[2430]: E0129 11:05:38.961657 2430 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.110.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-3-44dff38e5d?timeout=10s\": dial tcp 168.119.110.78:6443: connect: connection refused" interval="1.6s" Jan 29 11:05:39.141139 kubelet[2430]: I0129 11:05:39.140307 2430 kubelet_node_status.go:76] "Attempting to register 
node" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:39.597366 kubelet[2430]: E0129 11:05:39.597320 2430 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:39.598415 kubelet[2430]: E0129 11:05:39.598394 2430 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:39.601717 kubelet[2430]: E0129 11:05:39.601624 2430 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.604675 kubelet[2430]: E0129 11:05:40.604252 2430 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.604675 kubelet[2430]: E0129 11:05:40.604556 2430 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.836760 kubelet[2430]: E0129 11:05:40.836640 2430 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-3-44dff38e5d\" not found" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.909947 kubelet[2430]: I0129 11:05:40.909880 2430 kubelet_node_status.go:79] "Successfully registered node" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.959033 kubelet[2430]: I0129 11:05:40.958743 2430 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.981317 kubelet[2430]: E0129 11:05:40.981013 2430 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152-2-0-3-44dff38e5d\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.981317 kubelet[2430]: I0129 11:05:40.981051 2430 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.987186 kubelet[2430]: E0129 11:05:40.987154 2430 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.987532 kubelet[2430]: I0129 11:05:40.987342 2430 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:40.996048 kubelet[2430]: E0129 11:05:40.996001 2430 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152-2-0-3-44dff38e5d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:41.545122 kubelet[2430]: I0129 11:05:41.544884 2430 apiserver.go:52] "Watching apiserver" Jan 29 11:05:41.558178 kubelet[2430]: I0129 11:05:41.558146 2430 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:05:43.176346 systemd[1]: Reloading requested from client PID 2707 ('systemctl') (unit session-7.scope)... Jan 29 11:05:43.176361 systemd[1]: Reloading... Jan 29 11:05:43.275797 zram_generator::config[2750]: No configuration found. Jan 29 11:05:43.383239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:05:43.477557 systemd[1]: Reloading finished in 300 ms. 
Jan 29 11:05:43.516537 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:43.528447 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:05:43.528896 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:43.528980 systemd[1]: kubelet.service: Consumed 1.721s CPU time, 124.1M memory peak, 0B memory swap peak. Jan 29 11:05:43.536018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:05:43.674212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:05:43.688354 (kubelet)[2792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:05:43.736139 kubelet[2792]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:05:43.736139 kubelet[2792]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:05:43.736139 kubelet[2792]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:05:43.736665 kubelet[2792]: I0129 11:05:43.736225 2792 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:05:43.746136 kubelet[2792]: I0129 11:05:43.746103 2792 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:05:43.747296 kubelet[2792]: I0129 11:05:43.746284 2792 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:05:43.747296 kubelet[2792]: I0129 11:05:43.746571 2792 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:05:43.748290 kubelet[2792]: I0129 11:05:43.748241 2792 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:05:43.750844 kubelet[2792]: I0129 11:05:43.750821 2792 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:05:43.755203 kubelet[2792]: E0129 11:05:43.755159 2792 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:05:43.755364 kubelet[2792]: I0129 11:05:43.755342 2792 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:05:43.758866 kubelet[2792]: I0129 11:05:43.758837 2792 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:05:43.759253 kubelet[2792]: I0129 11:05:43.759212 2792 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:05:43.759507 kubelet[2792]: I0129 11:05:43.759326 2792 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-3-44dff38e5d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:05:43.759634 kubelet[2792]: I0129 11:05:43.759620 2792 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 29 11:05:43.759695 kubelet[2792]: I0129 11:05:43.759686 2792 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:05:43.759885 kubelet[2792]: I0129 11:05:43.759870 2792 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:05:43.760098 kubelet[2792]: I0129 11:05:43.760085 2792 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:05:43.760861 kubelet[2792]: I0129 11:05:43.760843 2792 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:05:43.760977 kubelet[2792]: I0129 11:05:43.760966 2792 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:05:43.761089 kubelet[2792]: I0129 11:05:43.761061 2792 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:05:43.768411 kubelet[2792]: I0129 11:05:43.767746 2792 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:05:43.770748 kubelet[2792]: I0129 11:05:43.768369 2792 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:05:43.776382 kubelet[2792]: I0129 11:05:43.776349 2792 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:05:43.776487 kubelet[2792]: I0129 11:05:43.776397 2792 server.go:1287] "Started kubelet" Jan 29 11:05:43.782328 kubelet[2792]: I0129 11:05:43.782295 2792 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:05:43.793021 kubelet[2792]: I0129 11:05:43.792899 2792 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:05:43.793964 kubelet[2792]: I0129 11:05:43.793931 2792 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:05:43.794913 kubelet[2792]: I0129 11:05:43.794897 2792 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:05:43.795280 kubelet[2792]: I0129 
11:05:43.795134 2792 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:05:43.795491 kubelet[2792]: I0129 11:05:43.795476 2792 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:05:43.797452 kubelet[2792]: I0129 11:05:43.797385 2792 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:05:43.797611 kubelet[2792]: I0129 11:05:43.797585 2792 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:05:43.798061 kubelet[2792]: I0129 11:05:43.797941 2792 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:05:43.799228 kubelet[2792]: I0129 11:05:43.799193 2792 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:05:43.799486 kubelet[2792]: I0129 11:05:43.799468 2792 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:05:43.803772 kubelet[2792]: I0129 11:05:43.803740 2792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:05:43.805127 kubelet[2792]: I0129 11:05:43.804828 2792 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:05:43.805127 kubelet[2792]: I0129 11:05:43.804849 2792 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:05:43.805127 kubelet[2792]: I0129 11:05:43.804867 2792 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 11:05:43.805127 kubelet[2792]: I0129 11:05:43.804873 2792 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:05:43.805127 kubelet[2792]: E0129 11:05:43.804909 2792 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:05:43.805444 kubelet[2792]: I0129 11:05:43.805416 2792 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:05:43.873993 kubelet[2792]: I0129 11:05:43.873951 2792 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:05:43.873993 kubelet[2792]: I0129 11:05:43.873971 2792 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:05:43.873993 kubelet[2792]: I0129 11:05:43.873989 2792 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:05:43.874342 kubelet[2792]: I0129 11:05:43.874153 2792 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:05:43.874342 kubelet[2792]: I0129 11:05:43.874176 2792 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:05:43.874342 kubelet[2792]: I0129 11:05:43.874196 2792 policy_none.go:49] "None policy: Start" Jan 29 11:05:43.874342 kubelet[2792]: I0129 11:05:43.874205 2792 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:05:43.874342 kubelet[2792]: I0129 11:05:43.874214 2792 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:05:43.874342 kubelet[2792]: I0129 11:05:43.874311 2792 state_mem.go:75] "Updated machine memory state" Jan 29 11:05:43.879663 kubelet[2792]: I0129 11:05:43.878383 2792 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:05:43.879663 kubelet[2792]: I0129 11:05:43.878552 2792 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:05:43.879663 kubelet[2792]: I0129 11:05:43.878563 2792 container_log_manager.go:189] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:05:43.879663 kubelet[2792]: I0129 11:05:43.878855 2792 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:05:43.884449 kubelet[2792]: E0129 11:05:43.884202 2792 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 11:05:43.906160 kubelet[2792]: I0129 11:05:43.906130 2792 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:43.906831 kubelet[2792]: I0129 11:05:43.906425 2792 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:43.907157 kubelet[2792]: I0129 11:05:43.906515 2792 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:43.986846 kubelet[2792]: I0129 11:05:43.985937 2792 kubelet_node_status.go:76] "Attempting to register node" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:43.997677 kubelet[2792]: I0129 11:05:43.997268 2792 kubelet_node_status.go:125] "Node was previously registered" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:43.997677 kubelet[2792]: I0129 11:05:43.997372 2792 kubelet_node_status.go:79] "Successfully registered node" node="ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.100282 kubelet[2792]: I0129 11:05:44.099775 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.100282 kubelet[2792]: I0129 11:05:44.099839 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64ce047490cd19ff34bcd41466b5a7a4-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-3-44dff38e5d\" (UID: \"64ce047490cd19ff34bcd41466b5a7a4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.100282 kubelet[2792]: I0129 11:05:44.099876 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64ce047490cd19ff34bcd41466b5a7a4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-3-44dff38e5d\" (UID: \"64ce047490cd19ff34bcd41466b5a7a4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.100282 kubelet[2792]: I0129 11:05:44.099921 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.100282 kubelet[2792]: I0129 11:05:44.099958 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.100682 kubelet[2792]: I0129 11:05:44.099991 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64ce047490cd19ff34bcd41466b5a7a4-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-3-44dff38e5d\" (UID: \"64ce047490cd19ff34bcd41466b5a7a4\") " pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 
11:05:44.100682 kubelet[2792]: I0129 11:05:44.100024 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.100682 kubelet[2792]: I0129 11:05:44.100059 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7baf7b995cfa6fa459e39464863f65c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-3-44dff38e5d\" (UID: \"7baf7b995cfa6fa459e39464863f65c4\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.100682 kubelet[2792]: I0129 11:05:44.100095 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e2590c740afd8b27761d8720c559dce5-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-3-44dff38e5d\" (UID: \"e2590c740afd8b27761d8720c559dce5\") " pod="kube-system/kube-scheduler-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.177230 sudo[2823]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:05:44.177507 sudo[2823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:05:44.624839 sudo[2823]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:44.764763 kubelet[2792]: I0129 11:05:44.764711 2792 apiserver.go:52] "Watching apiserver" Jan 29 11:05:44.798626 kubelet[2792]: I0129 11:05:44.798506 2792 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:05:44.843738 kubelet[2792]: I0129 11:05:44.843659 2792 kubelet.go:3200] "Creating a mirror pod for 
static pod" pod="kube-system/kube-scheduler-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.844277 kubelet[2792]: I0129 11:05:44.844253 2792 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.856627 kubelet[2792]: E0129 11:05:44.856437 2792 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4152-2-0-3-44dff38e5d\" already exists" pod="kube-system/kube-scheduler-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.858571 kubelet[2792]: E0129 11:05:44.858271 2792 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4152-2-0-3-44dff38e5d\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" Jan 29 11:05:44.873663 kubelet[2792]: I0129 11:05:44.873510 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-3-44dff38e5d" podStartSLOduration=1.8734946799999999 podStartE2EDuration="1.87349468s" podCreationTimestamp="2025-01-29 11:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:44.872361882 +0000 UTC m=+1.179297164" watchObservedRunningTime="2025-01-29 11:05:44.87349468 +0000 UTC m=+1.180429962" Jan 29 11:05:44.899255 kubelet[2792]: I0129 11:05:44.898807 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-3-44dff38e5d" podStartSLOduration=1.8987913189999999 podStartE2EDuration="1.898791319s" podCreationTimestamp="2025-01-29 11:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:44.884922221 +0000 UTC m=+1.191857503" watchObservedRunningTime="2025-01-29 11:05:44.898791319 +0000 UTC m=+1.205726601" Jan 29 11:05:44.899255 kubelet[2792]: I0129 11:05:44.898895 2792 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-3-44dff38e5d" podStartSLOduration=1.8988894379999999 podStartE2EDuration="1.898889438s" podCreationTimestamp="2025-01-29 11:05:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:44.898361599 +0000 UTC m=+1.205296921" watchObservedRunningTime="2025-01-29 11:05:44.898889438 +0000 UTC m=+1.205824720" Jan 29 11:05:46.291982 sudo[1905]: pam_unix(sudo:session): session closed for user root Jan 29 11:05:46.449982 sshd[1904]: Connection closed by 147.75.109.163 port 41882 Jan 29 11:05:46.450913 sshd-session[1902]: pam_unix(sshd:session): session closed for user core Jan 29 11:05:46.456579 systemd[1]: sshd@6-168.119.110.78:22-147.75.109.163:41882.service: Deactivated successfully. Jan 29 11:05:46.460517 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:05:46.460844 systemd[1]: session-7.scope: Consumed 6.751s CPU time, 154.0M memory peak, 0B memory swap peak. Jan 29 11:05:46.461607 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:05:46.462592 systemd-logind[1493]: Removed session 7. Jan 29 11:05:50.249947 kubelet[2792]: I0129 11:05:50.249909 2792 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:05:50.250559 containerd[1512]: time="2025-01-29T11:05:50.250523472Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:05:50.252515 kubelet[2792]: I0129 11:05:50.250831 2792 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:05:51.188880 systemd[1]: Created slice kubepods-besteffort-pode6005a35_29ed_441f_8318_df4c6cff9b54.slice - libcontainer container kubepods-besteffort-pode6005a35_29ed_441f_8318_df4c6cff9b54.slice. 
Jan 29 11:05:51.228757 systemd[1]: Created slice kubepods-burstable-pod402dd2fc_1dd8_4f51_8ae9_025541aebcbb.slice - libcontainer container kubepods-burstable-pod402dd2fc_1dd8_4f51_8ae9_025541aebcbb.slice.
Jan 29 11:05:51.243338 kubelet[2792]: I0129 11:05:51.243295 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-host-proc-sys-kernel\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243338 kubelet[2792]: I0129 11:05:51.243338 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-hostproc\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243507 kubelet[2792]: I0129 11:05:51.243356 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-cgroup\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243507 kubelet[2792]: I0129 11:05:51.243371 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-config-path\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243507 kubelet[2792]: I0129 11:05:51.243390 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4qr\" (UniqueName: \"kubernetes.io/projected/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-kube-api-access-hv4qr\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243507 kubelet[2792]: I0129 11:05:51.243407 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-run\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243507 kubelet[2792]: I0129 11:05:51.243445 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cni-path\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243507 kubelet[2792]: I0129 11:05:51.243462 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6005a35-29ed-441f-8318-df4c6cff9b54-lib-modules\") pod \"kube-proxy-n595m\" (UID: \"e6005a35-29ed-441f-8318-df4c6cff9b54\") " pod="kube-system/kube-proxy-n595m"
Jan 29 11:05:51.243640 kubelet[2792]: I0129 11:05:51.243477 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-xtables-lock\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243640 kubelet[2792]: I0129 11:05:51.243492 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-host-proc-sys-net\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243640 kubelet[2792]: I0129 11:05:51.243509 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-bpf-maps\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243640 kubelet[2792]: I0129 11:05:51.243522 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-etc-cni-netd\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243640 kubelet[2792]: I0129 11:05:51.243539 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6005a35-29ed-441f-8318-df4c6cff9b54-kube-proxy\") pod \"kube-proxy-n595m\" (UID: \"e6005a35-29ed-441f-8318-df4c6cff9b54\") " pod="kube-system/kube-proxy-n595m"
Jan 29 11:05:51.243640 kubelet[2792]: I0129 11:05:51.243553 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wc8\" (UniqueName: \"kubernetes.io/projected/e6005a35-29ed-441f-8318-df4c6cff9b54-kube-api-access-p7wc8\") pod \"kube-proxy-n595m\" (UID: \"e6005a35-29ed-441f-8318-df4c6cff9b54\") " pod="kube-system/kube-proxy-n595m"
Jan 29 11:05:51.243784 kubelet[2792]: I0129 11:05:51.243568 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-clustermesh-secrets\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243784 kubelet[2792]: I0129 11:05:51.243583 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-hubble-tls\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243784 kubelet[2792]: I0129 11:05:51.243598 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-lib-modules\") pod \"cilium-gz78t\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") " pod="kube-system/cilium-gz78t"
Jan 29 11:05:51.243784 kubelet[2792]: I0129 11:05:51.243615 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6005a35-29ed-441f-8318-df4c6cff9b54-xtables-lock\") pod \"kube-proxy-n595m\" (UID: \"e6005a35-29ed-441f-8318-df4c6cff9b54\") " pod="kube-system/kube-proxy-n595m"
Jan 29 11:05:51.372432 systemd[1]: Created slice kubepods-besteffort-pod7508022a_d326_4090_982b_2f0bc1f4d77c.slice - libcontainer container kubepods-besteffort-pod7508022a_d326_4090_982b_2f0bc1f4d77c.slice.
Jan 29 11:05:51.446558 kubelet[2792]: I0129 11:05:51.446271 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7508022a-d326-4090-982b-2f0bc1f4d77c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-tvntd\" (UID: \"7508022a-d326-4090-982b-2f0bc1f4d77c\") " pod="kube-system/cilium-operator-6c4d7847fc-tvntd"
Jan 29 11:05:51.446558 kubelet[2792]: I0129 11:05:51.446375 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv2gp\" (UniqueName: \"kubernetes.io/projected/7508022a-d326-4090-982b-2f0bc1f4d77c-kube-api-access-hv2gp\") pod \"cilium-operator-6c4d7847fc-tvntd\" (UID: \"7508022a-d326-4090-982b-2f0bc1f4d77c\") " pod="kube-system/cilium-operator-6c4d7847fc-tvntd"
Jan 29 11:05:51.504477 containerd[1512]: time="2025-01-29T11:05:51.504399215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n595m,Uid:e6005a35-29ed-441f-8318-df4c6cff9b54,Namespace:kube-system,Attempt:0,}"
Jan 29 11:05:51.533902 containerd[1512]: time="2025-01-29T11:05:51.533488703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gz78t,Uid:402dd2fc-1dd8-4f51-8ae9-025541aebcbb,Namespace:kube-system,Attempt:0,}"
Jan 29 11:05:51.533902 containerd[1512]: time="2025-01-29T11:05:51.533199823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:05:51.533902 containerd[1512]: time="2025-01-29T11:05:51.533261543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:05:51.533902 containerd[1512]: time="2025-01-29T11:05:51.533272143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:51.533902 containerd[1512]: time="2025-01-29T11:05:51.533345263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:51.556456 systemd[1]: Started cri-containerd-119cb6f7dffb852b8788c55147717e0fd657987c3015d8173f3b2b04c11e77b5.scope - libcontainer container 119cb6f7dffb852b8788c55147717e0fd657987c3015d8173f3b2b04c11e77b5.
Jan 29 11:05:51.569955 containerd[1512]: time="2025-01-29T11:05:51.569638742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:05:51.569955 containerd[1512]: time="2025-01-29T11:05:51.569725262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:05:51.569955 containerd[1512]: time="2025-01-29T11:05:51.569754622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:51.571025 containerd[1512]: time="2025-01-29T11:05:51.570050102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:51.589442 containerd[1512]: time="2025-01-29T11:05:51.589335200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n595m,Uid:e6005a35-29ed-441f-8318-df4c6cff9b54,Namespace:kube-system,Attempt:0,} returns sandbox id \"119cb6f7dffb852b8788c55147717e0fd657987c3015d8173f3b2b04c11e77b5\""
Jan 29 11:05:51.593246 containerd[1512]: time="2025-01-29T11:05:51.592946636Z" level=info msg="CreateContainer within sandbox \"119cb6f7dffb852b8788c55147717e0fd657987c3015d8173f3b2b04c11e77b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:05:51.602902 systemd[1]: Started cri-containerd-e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f.scope - libcontainer container e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f.
Jan 29 11:05:51.613150 containerd[1512]: time="2025-01-29T11:05:51.613110694Z" level=info msg="CreateContainer within sandbox \"119cb6f7dffb852b8788c55147717e0fd657987c3015d8173f3b2b04c11e77b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"046dfb74a1d311859a568c4b810bab40c3045bc1f43bd07e291955e55f800b7b\""
Jan 29 11:05:51.616283 containerd[1512]: time="2025-01-29T11:05:51.616227170Z" level=info msg="StartContainer for \"046dfb74a1d311859a568c4b810bab40c3045bc1f43bd07e291955e55f800b7b\""
Jan 29 11:05:51.642613 containerd[1512]: time="2025-01-29T11:05:51.642491821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gz78t,Uid:402dd2fc-1dd8-4f51-8ae9-025541aebcbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\""
Jan 29 11:05:51.646402 containerd[1512]: time="2025-01-29T11:05:51.645871977Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 29 11:05:51.657159 systemd[1]: Started cri-containerd-046dfb74a1d311859a568c4b810bab40c3045bc1f43bd07e291955e55f800b7b.scope - libcontainer container 046dfb74a1d311859a568c4b810bab40c3045bc1f43bd07e291955e55f800b7b.
Jan 29 11:05:51.678365 containerd[1512]: time="2025-01-29T11:05:51.677003862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tvntd,Uid:7508022a-d326-4090-982b-2f0bc1f4d77c,Namespace:kube-system,Attempt:0,}"
Jan 29 11:05:51.695859 containerd[1512]: time="2025-01-29T11:05:51.695815601Z" level=info msg="StartContainer for \"046dfb74a1d311859a568c4b810bab40c3045bc1f43bd07e291955e55f800b7b\" returns successfully"
Jan 29 11:05:51.709265 containerd[1512]: time="2025-01-29T11:05:51.708393347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:05:51.709265 containerd[1512]: time="2025-01-29T11:05:51.708459307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:05:51.709265 containerd[1512]: time="2025-01-29T11:05:51.708474467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:51.709265 containerd[1512]: time="2025-01-29T11:05:51.708559427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:05:51.729880 systemd[1]: Started cri-containerd-d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7.scope - libcontainer container d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7.
Jan 29 11:05:51.777973 containerd[1512]: time="2025-01-29T11:05:51.777922869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-tvntd,Uid:7508022a-d326-4090-982b-2f0bc1f4d77c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\""
Jan 29 11:05:51.879160 kubelet[2792]: I0129 11:05:51.879051 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n595m" podStartSLOduration=0.879006155 podStartE2EDuration="879.006155ms" podCreationTimestamp="2025-01-29 11:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:05:51.879007355 +0000 UTC m=+8.185942717" watchObservedRunningTime="2025-01-29 11:05:51.879006155 +0000 UTC m=+8.185941437"
Jan 29 11:05:59.003624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390258187.mount: Deactivated successfully.
Jan 29 11:06:00.437217 containerd[1512]: time="2025-01-29T11:06:00.436239597Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:06:00.438560 containerd[1512]: time="2025-01-29T11:06:00.438516235Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jan 29 11:06:00.439773 containerd[1512]: time="2025-01-29T11:06:00.439741114Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:06:00.441609 containerd[1512]: time="2025-01-29T11:06:00.441571633Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.795382056s"
Jan 29 11:06:00.441736 containerd[1512]: time="2025-01-29T11:06:00.441716793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jan 29 11:06:00.444286 containerd[1512]: time="2025-01-29T11:06:00.444259232Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 29 11:06:00.445251 containerd[1512]: time="2025-01-29T11:06:00.445207191Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:06:00.471336 containerd[1512]: time="2025-01-29T11:06:00.471293855Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565\""
Jan 29 11:06:00.472457 containerd[1512]: time="2025-01-29T11:06:00.471894735Z" level=info msg="StartContainer for \"93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565\""
Jan 29 11:06:00.500675 systemd[1]: run-containerd-runc-k8s.io-93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565-runc.83poDz.mount: Deactivated successfully.
Jan 29 11:06:00.513008 systemd[1]: Started cri-containerd-93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565.scope - libcontainer container 93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565.
Jan 29 11:06:00.542577 containerd[1512]: time="2025-01-29T11:06:00.542537092Z" level=info msg="StartContainer for \"93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565\" returns successfully"
Jan 29 11:06:00.560978 systemd[1]: cri-containerd-93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565.scope: Deactivated successfully.
Jan 29 11:06:00.750790 containerd[1512]: time="2025-01-29T11:06:00.749871607Z" level=info msg="shim disconnected" id=93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565 namespace=k8s.io
Jan 29 11:06:00.750790 containerd[1512]: time="2025-01-29T11:06:00.749991887Z" level=warning msg="cleaning up after shim disconnected" id=93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565 namespace=k8s.io
Jan 29 11:06:00.750790 containerd[1512]: time="2025-01-29T11:06:00.750004607Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:00.763428 containerd[1512]: time="2025-01-29T11:06:00.763357239Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:06:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 11:06:00.894850 containerd[1512]: time="2025-01-29T11:06:00.894807319Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:06:00.910958 containerd[1512]: time="2025-01-29T11:06:00.910876389Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e\""
Jan 29 11:06:00.915853 containerd[1512]: time="2025-01-29T11:06:00.914156387Z" level=info msg="StartContainer for \"64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e\""
Jan 29 11:06:00.943088 systemd[1]: Started cri-containerd-64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e.scope - libcontainer container 64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e.
Jan 29 11:06:00.971559 containerd[1512]: time="2025-01-29T11:06:00.971466232Z" level=info msg="StartContainer for \"64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e\" returns successfully"
Jan 29 11:06:00.981833 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:06:00.982656 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:06:00.982951 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:06:00.991130 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:06:00.991338 systemd[1]: cri-containerd-64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e.scope: Deactivated successfully.
Jan 29 11:06:01.011203 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:06:01.029737 containerd[1512]: time="2025-01-29T11:06:01.029619719Z" level=info msg="shim disconnected" id=64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e namespace=k8s.io
Jan 29 11:06:01.029737 containerd[1512]: time="2025-01-29T11:06:01.029684439Z" level=warning msg="cleaning up after shim disconnected" id=64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e namespace=k8s.io
Jan 29 11:06:01.029737 containerd[1512]: time="2025-01-29T11:06:01.029694799Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:01.457414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565-rootfs.mount: Deactivated successfully.
Jan 29 11:06:01.898863 containerd[1512]: time="2025-01-29T11:06:01.898556475Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:06:01.919616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501769314.mount: Deactivated successfully.
Jan 29 11:06:01.921088 containerd[1512]: time="2025-01-29T11:06:01.921025422Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036\""
Jan 29 11:06:01.923145 containerd[1512]: time="2025-01-29T11:06:01.922987181Z" level=info msg="StartContainer for \"883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036\""
Jan 29 11:06:01.960962 systemd[1]: Started cri-containerd-883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036.scope - libcontainer container 883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036.
Jan 29 11:06:01.995548 containerd[1512]: time="2025-01-29T11:06:01.995487301Z" level=info msg="StartContainer for \"883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036\" returns successfully"
Jan 29 11:06:01.999284 systemd[1]: cri-containerd-883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036.scope: Deactivated successfully.
Jan 29 11:06:02.030131 containerd[1512]: time="2025-01-29T11:06:02.029985323Z" level=info msg="shim disconnected" id=883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036 namespace=k8s.io
Jan 29 11:06:02.030131 containerd[1512]: time="2025-01-29T11:06:02.030070363Z" level=warning msg="cleaning up after shim disconnected" id=883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036 namespace=k8s.io
Jan 29 11:06:02.030131 containerd[1512]: time="2025-01-29T11:06:02.030083803Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:02.459142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036-rootfs.mount: Deactivated successfully.
Jan 29 11:06:02.903219 containerd[1512]: time="2025-01-29T11:06:02.903166438Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:06:02.923771 containerd[1512]: time="2025-01-29T11:06:02.923697468Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47\""
Jan 29 11:06:02.924795 containerd[1512]: time="2025-01-29T11:06:02.924764667Z" level=info msg="StartContainer for \"df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47\""
Jan 29 11:06:02.956916 systemd[1]: Started cri-containerd-df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47.scope - libcontainer container df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47.
Jan 29 11:06:02.990495 systemd[1]: cri-containerd-df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47.scope: Deactivated successfully.
Jan 29 11:06:02.995728 containerd[1512]: time="2025-01-29T11:06:02.995565111Z" level=info msg="StartContainer for \"df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47\" returns successfully"
Jan 29 11:06:03.017639 containerd[1512]: time="2025-01-29T11:06:03.017540381Z" level=info msg="shim disconnected" id=df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47 namespace=k8s.io
Jan 29 11:06:03.018078 containerd[1512]: time="2025-01-29T11:06:03.017738660Z" level=warning msg="cleaning up after shim disconnected" id=df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47 namespace=k8s.io
Jan 29 11:06:03.018078 containerd[1512]: time="2025-01-29T11:06:03.017761900Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:06:03.029781 containerd[1512]: time="2025-01-29T11:06:03.028824615Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:06:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 29 11:06:03.458841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47-rootfs.mount: Deactivated successfully.
Jan 29 11:06:03.912162 containerd[1512]: time="2025-01-29T11:06:03.912120926Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:06:03.943754 containerd[1512]: time="2025-01-29T11:06:03.943625151Z" level=info msg="CreateContainer within sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\""
Jan 29 11:06:03.945314 containerd[1512]: time="2025-01-29T11:06:03.945200710Z" level=info msg="StartContainer for \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\""
Jan 29 11:06:03.976914 systemd[1]: Started cri-containerd-18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe.scope - libcontainer container 18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe.
Jan 29 11:06:04.006102 containerd[1512]: time="2025-01-29T11:06:04.006051122Z" level=info msg="StartContainer for \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\" returns successfully"
Jan 29 11:06:04.130248 kubelet[2792]: I0129 11:06:04.129975 2792 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 29 11:06:04.176251 systemd[1]: Created slice kubepods-burstable-podfd4f8c42_4acb_4426_8cf8_854be01785d8.slice - libcontainer container kubepods-burstable-podfd4f8c42_4acb_4426_8cf8_854be01785d8.slice.
Jan 29 11:06:04.186333 systemd[1]: Created slice kubepods-burstable-podbcbaae86_8786_42f5_a867_52b6477cd4fa.slice - libcontainer container kubepods-burstable-podbcbaae86_8786_42f5_a867_52b6477cd4fa.slice.
Jan 29 11:06:04.236659 kubelet[2792]: I0129 11:06:04.236628 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcbaae86-8786-42f5-a867-52b6477cd4fa-config-volume\") pod \"coredns-668d6bf9bc-hxlqp\" (UID: \"bcbaae86-8786-42f5-a867-52b6477cd4fa\") " pod="kube-system/coredns-668d6bf9bc-hxlqp"
Jan 29 11:06:04.237097 kubelet[2792]: I0129 11:06:04.237002 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cqjs\" (UniqueName: \"kubernetes.io/projected/fd4f8c42-4acb-4426-8cf8-854be01785d8-kube-api-access-6cqjs\") pod \"coredns-668d6bf9bc-9dq5h\" (UID: \"fd4f8c42-4acb-4426-8cf8-854be01785d8\") " pod="kube-system/coredns-668d6bf9bc-9dq5h"
Jan 29 11:06:04.237097 kubelet[2792]: I0129 11:06:04.237032 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7nh\" (UniqueName: \"kubernetes.io/projected/bcbaae86-8786-42f5-a867-52b6477cd4fa-kube-api-access-hj7nh\") pod \"coredns-668d6bf9bc-hxlqp\" (UID: \"bcbaae86-8786-42f5-a867-52b6477cd4fa\") " pod="kube-system/coredns-668d6bf9bc-hxlqp"
Jan 29 11:06:04.237097 kubelet[2792]: I0129 11:06:04.237052 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd4f8c42-4acb-4426-8cf8-854be01785d8-config-volume\") pod \"coredns-668d6bf9bc-9dq5h\" (UID: \"fd4f8c42-4acb-4426-8cf8-854be01785d8\") " pod="kube-system/coredns-668d6bf9bc-9dq5h"
Jan 29 11:06:04.484250 containerd[1512]: time="2025-01-29T11:06:04.483858162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9dq5h,Uid:fd4f8c42-4acb-4426-8cf8-854be01785d8,Namespace:kube-system,Attempt:0,}"
Jan 29 11:06:04.490504 containerd[1512]: time="2025-01-29T11:06:04.490175759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hxlqp,Uid:bcbaae86-8786-42f5-a867-52b6477cd4fa,Namespace:kube-system,Attempt:0,}"
Jan 29 11:06:05.888961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990323548.mount: Deactivated successfully.
Jan 29 11:06:06.491766 containerd[1512]: time="2025-01-29T11:06:06.490904204Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:06:06.492533 containerd[1512]: time="2025-01-29T11:06:06.492483964Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 29 11:06:06.493514 containerd[1512]: time="2025-01-29T11:06:06.493405083Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:06:06.495754 containerd[1512]: time="2025-01-29T11:06:06.495373443Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 6.050771051s"
Jan 29 11:06:06.495754 containerd[1512]: time="2025-01-29T11:06:06.495430723Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 29 11:06:06.498123 containerd[1512]: time="2025-01-29T11:06:06.497936202Z" level=info msg="CreateContainer within sandbox \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 29 11:06:06.521298 containerd[1512]: time="2025-01-29T11:06:06.521224194Z" level=info msg="CreateContainer within sandbox \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\""
Jan 29 11:06:06.523483 containerd[1512]: time="2025-01-29T11:06:06.522001874Z" level=info msg="StartContainer for \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\""
Jan 29 11:06:06.551929 systemd[1]: Started cri-containerd-dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d.scope - libcontainer container dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d.
Jan 29 11:06:06.581732 containerd[1512]: time="2025-01-29T11:06:06.581593414Z" level=info msg="StartContainer for \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\" returns successfully"
Jan 29 11:06:06.948131 kubelet[2792]: I0129 11:06:06.947739 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gz78t" podStartSLOduration=7.148573157 podStartE2EDuration="15.947722531s" podCreationTimestamp="2025-01-29 11:05:51 +0000 UTC" firstStartedPulling="2025-01-29 11:05:51.644456498 +0000 UTC m=+7.951391780" lastFinishedPulling="2025-01-29 11:06:00.443605872 +0000 UTC m=+16.750541154" observedRunningTime="2025-01-29 11:06:04.934657373 +0000 UTC m=+21.241592695" watchObservedRunningTime="2025-01-29 11:06:06.947722531 +0000 UTC m=+23.254657813"
Jan 29 11:06:06.949222 kubelet[2792]: I0129 11:06:06.948905 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-tvntd" podStartSLOduration=1.231845996 podStartE2EDuration="15.948892651s" podCreationTimestamp="2025-01-29 11:05:51 +0000 UTC" firstStartedPulling="2025-01-29 11:05:51.779333187 +0000 UTC m=+8.086268429" lastFinishedPulling="2025-01-29 11:06:06.496379762 +0000 UTC m=+22.803315084" observedRunningTime="2025-01-29 11:06:06.946673411 +0000 UTC m=+23.253608693" watchObservedRunningTime="2025-01-29 11:06:06.948892651 +0000 UTC m=+23.255827933"
Jan 29 11:06:10.224497 systemd-networkd[1376]: cilium_host: Link UP
Jan 29 11:06:10.225137 systemd-networkd[1376]: cilium_net: Link UP
Jan 29 11:06:10.225142 systemd-networkd[1376]: cilium_net: Gained carrier
Jan 29 11:06:10.225685 systemd-networkd[1376]: cilium_host: Gained carrier
Jan 29 11:06:10.326101 systemd-networkd[1376]: cilium_vxlan: Link UP
Jan 29 11:06:10.326108 systemd-networkd[1376]: cilium_vxlan: Gained carrier
Jan 29 11:06:10.596092 kernel: NET: Registered PF_ALG protocol family
Jan 29 11:06:10.856915 systemd-networkd[1376]: cilium_net: Gained IPv6LL
Jan 29 11:06:11.177008 systemd-networkd[1376]: cilium_host: Gained IPv6LL
Jan 29 11:06:11.339754 systemd-networkd[1376]: lxc_health: Link UP
Jan 29 11:06:11.340235 systemd-networkd[1376]: lxc_health: Gained carrier
Jan 29 11:06:11.590898 systemd-networkd[1376]: lxc721a59eaaed8: Link UP
Jan 29 11:06:11.609561 systemd-networkd[1376]: lxc550884de30fe: Link UP
Jan 29 11:06:11.613768 kernel: eth0: renamed from tmp352cc
Jan 29 11:06:11.622746 kernel: eth0: renamed from tmpc4f71
Jan 29 11:06:11.626870 systemd-networkd[1376]: lxc721a59eaaed8: Gained carrier
Jan 29 11:06:11.629474 systemd-networkd[1376]: lxc550884de30fe: Gained carrier
Jan 29 11:06:11.944879 systemd-networkd[1376]: cilium_vxlan: Gained IPv6LL
Jan 29 11:06:13.032957 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Jan 29 11:06:13.225093 systemd-networkd[1376]: lxc721a59eaaed8: Gained IPv6LL
Jan 29 11:06:13.481030 systemd-networkd[1376]: lxc550884de30fe: Gained IPv6LL
Jan 29 11:06:15.386199 containerd[1512]: time="2025-01-29T11:06:15.386124656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:06:15.386889 containerd[1512]: time="2025-01-29T11:06:15.386579056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:06:15.386889 containerd[1512]: time="2025-01-29T11:06:15.386617216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:06:15.386889 containerd[1512]: time="2025-01-29T11:06:15.386753536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:06:15.424214 systemd[1]: run-containerd-runc-k8s.io-352cc8f9cc4b995945677094c9a49967a6feec8694357bdfd678ffac533e8a83-runc.oA49um.mount: Deactivated successfully.
Jan 29 11:06:15.431731 containerd[1512]: time="2025-01-29T11:06:15.429982655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:06:15.431731 containerd[1512]: time="2025-01-29T11:06:15.430052295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:06:15.431731 containerd[1512]: time="2025-01-29T11:06:15.430068615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:06:15.431731 containerd[1512]: time="2025-01-29T11:06:15.430381575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:06:15.435924 systemd[1]: Started cri-containerd-352cc8f9cc4b995945677094c9a49967a6feec8694357bdfd678ffac533e8a83.scope - libcontainer container 352cc8f9cc4b995945677094c9a49967a6feec8694357bdfd678ffac533e8a83.
Jan 29 11:06:15.476307 systemd[1]: Started cri-containerd-c4f71c7feb899ae37a48db3212fbfc4a899dbd2970e7d37381ab46e550db7ed7.scope - libcontainer container c4f71c7feb899ae37a48db3212fbfc4a899dbd2970e7d37381ab46e550db7ed7. Jan 29 11:06:15.499733 containerd[1512]: time="2025-01-29T11:06:15.497964134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hxlqp,Uid:bcbaae86-8786-42f5-a867-52b6477cd4fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"352cc8f9cc4b995945677094c9a49967a6feec8694357bdfd678ffac533e8a83\"" Jan 29 11:06:15.504218 containerd[1512]: time="2025-01-29T11:06:15.504168014Z" level=info msg="CreateContainer within sandbox \"352cc8f9cc4b995945677094c9a49967a6feec8694357bdfd678ffac533e8a83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:06:15.524566 containerd[1512]: time="2025-01-29T11:06:15.524511574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9dq5h,Uid:fd4f8c42-4acb-4426-8cf8-854be01785d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4f71c7feb899ae37a48db3212fbfc4a899dbd2970e7d37381ab46e550db7ed7\"" Jan 29 11:06:15.529837 containerd[1512]: time="2025-01-29T11:06:15.529560934Z" level=info msg="CreateContainer within sandbox \"c4f71c7feb899ae37a48db3212fbfc4a899dbd2970e7d37381ab46e550db7ed7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:06:15.531118 containerd[1512]: time="2025-01-29T11:06:15.530161174Z" level=info msg="CreateContainer within sandbox \"352cc8f9cc4b995945677094c9a49967a6feec8694357bdfd678ffac533e8a83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"45ac4f031ac888ee8caaa3cddb940919335b5d5300729dd2777484e5c645fd79\"" Jan 29 11:06:15.531750 containerd[1512]: time="2025-01-29T11:06:15.531661574Z" level=info msg="StartContainer for \"45ac4f031ac888ee8caaa3cddb940919335b5d5300729dd2777484e5c645fd79\"" Jan 29 11:06:15.554073 containerd[1512]: time="2025-01-29T11:06:15.554002293Z" level=info 
msg="CreateContainer within sandbox \"c4f71c7feb899ae37a48db3212fbfc4a899dbd2970e7d37381ab46e550db7ed7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9afa4fca3803b00170ed9a8a6d996e36cf5af9e1fb020aae5c16662f8aa8513f\"" Jan 29 11:06:15.555473 containerd[1512]: time="2025-01-29T11:06:15.554626533Z" level=info msg="StartContainer for \"9afa4fca3803b00170ed9a8a6d996e36cf5af9e1fb020aae5c16662f8aa8513f\"" Jan 29 11:06:15.567540 systemd[1]: Started cri-containerd-45ac4f031ac888ee8caaa3cddb940919335b5d5300729dd2777484e5c645fd79.scope - libcontainer container 45ac4f031ac888ee8caaa3cddb940919335b5d5300729dd2777484e5c645fd79. Jan 29 11:06:15.594948 systemd[1]: Started cri-containerd-9afa4fca3803b00170ed9a8a6d996e36cf5af9e1fb020aae5c16662f8aa8513f.scope - libcontainer container 9afa4fca3803b00170ed9a8a6d996e36cf5af9e1fb020aae5c16662f8aa8513f. Jan 29 11:06:15.616052 containerd[1512]: time="2025-01-29T11:06:15.615999252Z" level=info msg="StartContainer for \"45ac4f031ac888ee8caaa3cddb940919335b5d5300729dd2777484e5c645fd79\" returns successfully" Jan 29 11:06:15.630200 containerd[1512]: time="2025-01-29T11:06:15.630042012Z" level=info msg="StartContainer for \"9afa4fca3803b00170ed9a8a6d996e36cf5af9e1fb020aae5c16662f8aa8513f\" returns successfully" Jan 29 11:06:15.976336 kubelet[2792]: I0129 11:06:15.976263 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hxlqp" podStartSLOduration=24.976246247 podStartE2EDuration="24.976246247s" podCreationTimestamp="2025-01-29 11:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:06:15.974698727 +0000 UTC m=+32.281634009" watchObservedRunningTime="2025-01-29 11:06:15.976246247 +0000 UTC m=+32.283181529" Jan 29 11:06:16.028149 kubelet[2792]: I0129 11:06:16.027547 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-9dq5h" podStartSLOduration=25.027528447 podStartE2EDuration="25.027528447s" podCreationTimestamp="2025-01-29 11:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:06:15.993139247 +0000 UTC m=+32.300074569" watchObservedRunningTime="2025-01-29 11:06:16.027528447 +0000 UTC m=+32.334463729" Jan 29 11:10:33.288196 systemd[1]: Started sshd@8-168.119.110.78:22-147.75.109.163:56786.service - OpenSSH per-connection server daemon (147.75.109.163:56786). Jan 29 11:10:34.289455 sshd[4206]: Accepted publickey for core from 147.75.109.163 port 56786 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:34.291738 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:34.297212 systemd-logind[1493]: New session 8 of user core. Jan 29 11:10:34.308014 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:10:35.065528 sshd[4208]: Connection closed by 147.75.109.163 port 56786 Jan 29 11:10:35.066587 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:35.070684 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:10:35.070952 systemd[1]: sshd@8-168.119.110.78:22-147.75.109.163:56786.service: Deactivated successfully. Jan 29 11:10:35.072585 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:10:35.075582 systemd-logind[1493]: Removed session 8. Jan 29 11:10:40.244021 systemd[1]: Started sshd@9-168.119.110.78:22-147.75.109.163:46318.service - OpenSSH per-connection server daemon (147.75.109.163:46318). 
Jan 29 11:10:41.234736 sshd[4219]: Accepted publickey for core from 147.75.109.163 port 46318 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:41.236385 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:41.241977 systemd-logind[1493]: New session 9 of user core. Jan 29 11:10:41.246924 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:10:41.995897 sshd[4221]: Connection closed by 147.75.109.163 port 46318 Jan 29 11:10:41.996917 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:42.002490 systemd[1]: sshd@9-168.119.110.78:22-147.75.109.163:46318.service: Deactivated successfully. Jan 29 11:10:42.005259 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:10:42.007818 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:10:42.009075 systemd-logind[1493]: Removed session 9. Jan 29 11:10:47.171281 systemd[1]: Started sshd@10-168.119.110.78:22-147.75.109.163:46330.service - OpenSSH per-connection server daemon (147.75.109.163:46330). Jan 29 11:10:48.146685 sshd[4235]: Accepted publickey for core from 147.75.109.163 port 46330 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:48.148654 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:48.154483 systemd-logind[1493]: New session 10 of user core. Jan 29 11:10:48.159912 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:10:48.897056 sshd[4237]: Connection closed by 147.75.109.163 port 46330 Jan 29 11:10:48.896875 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:48.902849 systemd[1]: sshd@10-168.119.110.78:22-147.75.109.163:46330.service: Deactivated successfully. Jan 29 11:10:48.906389 systemd[1]: session-10.scope: Deactivated successfully. 
Jan 29 11:10:48.907861 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:10:48.910473 systemd-logind[1493]: Removed session 10. Jan 29 11:10:49.075042 systemd[1]: Started sshd@11-168.119.110.78:22-147.75.109.163:59844.service - OpenSSH per-connection server daemon (147.75.109.163:59844). Jan 29 11:10:50.082015 sshd[4249]: Accepted publickey for core from 147.75.109.163 port 59844 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:50.084042 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:50.088484 systemd-logind[1493]: New session 11 of user core. Jan 29 11:10:50.102084 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:10:50.886796 sshd[4251]: Connection closed by 147.75.109.163 port 59844 Jan 29 11:10:50.887482 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:50.891772 systemd[1]: sshd@11-168.119.110.78:22-147.75.109.163:59844.service: Deactivated successfully. Jan 29 11:10:50.893664 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:10:50.895315 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:10:50.896688 systemd-logind[1493]: Removed session 11. Jan 29 11:10:51.065095 systemd[1]: Started sshd@12-168.119.110.78:22-147.75.109.163:59846.service - OpenSSH per-connection server daemon (147.75.109.163:59846). Jan 29 11:10:52.054387 sshd[4260]: Accepted publickey for core from 147.75.109.163 port 59846 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:52.056464 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:52.061937 systemd-logind[1493]: New session 12 of user core. Jan 29 11:10:52.072050 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 11:10:52.813632 sshd[4264]: Connection closed by 147.75.109.163 port 59846 Jan 29 11:10:52.815841 sshd-session[4260]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:52.820404 systemd[1]: sshd@12-168.119.110.78:22-147.75.109.163:59846.service: Deactivated successfully. Jan 29 11:10:52.823236 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:10:52.825067 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:10:52.826256 systemd-logind[1493]: Removed session 12. Jan 29 11:10:57.992209 systemd[1]: Started sshd@13-168.119.110.78:22-147.75.109.163:52292.service - OpenSSH per-connection server daemon (147.75.109.163:52292). Jan 29 11:10:59.004596 sshd[4275]: Accepted publickey for core from 147.75.109.163 port 52292 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:59.007132 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:59.013016 systemd-logind[1493]: New session 13 of user core. Jan 29 11:10:59.021021 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:10:59.761646 sshd[4277]: Connection closed by 147.75.109.163 port 52292 Jan 29 11:10:59.762391 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:59.766573 systemd[1]: sshd@13-168.119.110.78:22-147.75.109.163:52292.service: Deactivated successfully. Jan 29 11:10:59.769338 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:10:59.770572 systemd-logind[1493]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:10:59.771848 systemd-logind[1493]: Removed session 13. Jan 29 11:10:59.939229 systemd[1]: Started sshd@14-168.119.110.78:22-147.75.109.163:52300.service - OpenSSH per-connection server daemon (147.75.109.163:52300). 
Jan 29 11:11:00.924695 sshd[4287]: Accepted publickey for core from 147.75.109.163 port 52300 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:11:00.927229 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:00.933576 systemd-logind[1493]: New session 14 of user core. Jan 29 11:11:00.936876 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:11:01.717381 sshd[4289]: Connection closed by 147.75.109.163 port 52300 Jan 29 11:11:01.718276 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:01.722401 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:11:01.723200 systemd[1]: sshd@14-168.119.110.78:22-147.75.109.163:52300.service: Deactivated successfully. Jan 29 11:11:01.725452 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:11:01.727013 systemd-logind[1493]: Removed session 14. Jan 29 11:11:01.897272 systemd[1]: Started sshd@15-168.119.110.78:22-147.75.109.163:52310.service - OpenSSH per-connection server daemon (147.75.109.163:52310). Jan 29 11:11:02.884062 sshd[4298]: Accepted publickey for core from 147.75.109.163 port 52310 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:11:02.886140 sshd-session[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:02.891348 systemd-logind[1493]: New session 15 of user core. Jan 29 11:11:02.900648 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:11:04.558330 sshd[4300]: Connection closed by 147.75.109.163 port 52310 Jan 29 11:11:04.560054 sshd-session[4298]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:04.566771 systemd[1]: sshd@15-168.119.110.78:22-147.75.109.163:52310.service: Deactivated successfully. Jan 29 11:11:04.569919 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 29 11:11:04.570823 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:11:04.572047 systemd-logind[1493]: Removed session 15. Jan 29 11:11:04.730296 systemd[1]: Started sshd@16-168.119.110.78:22-147.75.109.163:52320.service - OpenSSH per-connection server daemon (147.75.109.163:52320). Jan 29 11:11:05.706409 sshd[4317]: Accepted publickey for core from 147.75.109.163 port 52320 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:11:05.708312 sshd-session[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:05.713942 systemd-logind[1493]: New session 16 of user core. Jan 29 11:11:05.720161 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:11:06.591788 sshd[4319]: Connection closed by 147.75.109.163 port 52320 Jan 29 11:11:06.592651 sshd-session[4317]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:06.597747 systemd[1]: sshd@16-168.119.110.78:22-147.75.109.163:52320.service: Deactivated successfully. Jan 29 11:11:06.599790 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:11:06.603633 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:11:06.605360 systemd-logind[1493]: Removed session 16. Jan 29 11:11:06.774172 systemd[1]: Started sshd@17-168.119.110.78:22-147.75.109.163:52322.service - OpenSSH per-connection server daemon (147.75.109.163:52322). Jan 29 11:11:07.776834 sshd[4328]: Accepted publickey for core from 147.75.109.163 port 52322 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:11:07.778884 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:07.784032 systemd-logind[1493]: New session 17 of user core. Jan 29 11:11:07.788910 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 11:11:08.528933 sshd[4330]: Connection closed by 147.75.109.163 port 52322 Jan 29 11:11:08.530502 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:08.534977 systemd[1]: sshd@17-168.119.110.78:22-147.75.109.163:52322.service: Deactivated successfully. Jan 29 11:11:08.537557 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:11:08.538978 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:11:08.540177 systemd-logind[1493]: Removed session 17. Jan 29 11:11:13.703038 systemd[1]: Started sshd@18-168.119.110.78:22-147.75.109.163:38240.service - OpenSSH per-connection server daemon (147.75.109.163:38240). Jan 29 11:11:14.691067 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 38240 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:11:14.693288 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:14.702940 systemd-logind[1493]: New session 18 of user core. Jan 29 11:11:14.709952 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:11:15.432818 sshd[4346]: Connection closed by 147.75.109.163 port 38240 Jan 29 11:11:15.435928 sshd-session[4344]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:15.439563 systemd[1]: sshd@18-168.119.110.78:22-147.75.109.163:38240.service: Deactivated successfully. Jan 29 11:11:15.441875 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:11:15.443934 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:11:15.445460 systemd-logind[1493]: Removed session 18. Jan 29 11:11:20.610237 systemd[1]: Started sshd@19-168.119.110.78:22-147.75.109.163:59206.service - OpenSSH per-connection server daemon (147.75.109.163:59206). 
Jan 29 11:11:21.586396 sshd[4357]: Accepted publickey for core from 147.75.109.163 port 59206 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:11:21.589284 sshd-session[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:21.598974 systemd-logind[1493]: New session 19 of user core. Jan 29 11:11:21.604095 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:11:22.330429 sshd[4359]: Connection closed by 147.75.109.163 port 59206 Jan 29 11:11:22.331404 sshd-session[4357]: pam_unix(sshd:session): session closed for user core Jan 29 11:11:22.336324 systemd[1]: sshd@19-168.119.110.78:22-147.75.109.163:59206.service: Deactivated successfully. Jan 29 11:11:22.338942 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:11:22.340531 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:11:22.342292 systemd-logind[1493]: Removed session 19. Jan 29 11:11:22.510084 systemd[1]: Started sshd@20-168.119.110.78:22-147.75.109.163:59216.service - OpenSSH per-connection server daemon (147.75.109.163:59216). Jan 29 11:11:23.510946 sshd[4372]: Accepted publickey for core from 147.75.109.163 port 59216 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:11:23.513158 sshd-session[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:11:23.517482 systemd-logind[1493]: New session 20 of user core. Jan 29 11:11:23.524917 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 29 11:11:25.496632 containerd[1512]: time="2025-01-29T11:11:25.496559261Z" level=info msg="StopContainer for \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\" with timeout 30 (s)" Jan 29 11:11:25.499145 containerd[1512]: time="2025-01-29T11:11:25.498752303Z" level=info msg="Stop container \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\" with signal terminated" Jan 29 11:11:25.510662 systemd[1]: run-containerd-runc-k8s.io-18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe-runc.9S0fg9.mount: Deactivated successfully. Jan 29 11:11:25.513169 systemd[1]: cri-containerd-dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d.scope: Deactivated successfully. Jan 29 11:11:25.527643 containerd[1512]: time="2025-01-29T11:11:25.527479411Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:11:25.540543 containerd[1512]: time="2025-01-29T11:11:25.540153263Z" level=info msg="StopContainer for \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\" with timeout 2 (s)" Jan 29 11:11:25.543655 containerd[1512]: time="2025-01-29T11:11:25.543560586Z" level=info msg="Stop container \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\" with signal terminated" Jan 29 11:11:25.547446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d-rootfs.mount: Deactivated successfully. 
Jan 29 11:11:25.554644 systemd-networkd[1376]: lxc_health: Link DOWN Jan 29 11:11:25.554653 systemd-networkd[1376]: lxc_health: Lost carrier Jan 29 11:11:25.563991 containerd[1512]: time="2025-01-29T11:11:25.563922806Z" level=info msg="shim disconnected" id=dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d namespace=k8s.io Jan 29 11:11:25.563991 containerd[1512]: time="2025-01-29T11:11:25.563994726Z" level=warning msg="cleaning up after shim disconnected" id=dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d namespace=k8s.io Jan 29 11:11:25.564140 containerd[1512]: time="2025-01-29T11:11:25.564005366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:25.571950 systemd[1]: cri-containerd-18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe.scope: Deactivated successfully. Jan 29 11:11:25.572692 systemd[1]: cri-containerd-18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe.scope: Consumed 7.549s CPU time. Jan 29 11:11:25.582512 containerd[1512]: time="2025-01-29T11:11:25.582446863Z" level=info msg="StopContainer for \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\" returns successfully" Jan 29 11:11:25.583497 containerd[1512]: time="2025-01-29T11:11:25.583454504Z" level=info msg="StopPodSandbox for \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\"" Jan 29 11:11:25.583564 containerd[1512]: time="2025-01-29T11:11:25.583507584Z" level=info msg="Container to stop \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.586225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7-shm.mount: Deactivated successfully. Jan 29 11:11:25.595473 systemd[1]: cri-containerd-d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7.scope: Deactivated successfully. 
Jan 29 11:11:25.602038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe-rootfs.mount: Deactivated successfully. Jan 29 11:11:25.608608 containerd[1512]: time="2025-01-29T11:11:25.608221808Z" level=info msg="shim disconnected" id=18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe namespace=k8s.io Jan 29 11:11:25.608608 containerd[1512]: time="2025-01-29T11:11:25.608451488Z" level=warning msg="cleaning up after shim disconnected" id=18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe namespace=k8s.io Jan 29 11:11:25.608608 containerd[1512]: time="2025-01-29T11:11:25.608462728Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:25.636651 containerd[1512]: time="2025-01-29T11:11:25.636602075Z" level=info msg="StopContainer for \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\" returns successfully" Jan 29 11:11:25.637114 containerd[1512]: time="2025-01-29T11:11:25.637063475Z" level=info msg="shim disconnected" id=d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7 namespace=k8s.io Jan 29 11:11:25.637114 containerd[1512]: time="2025-01-29T11:11:25.637113435Z" level=warning msg="cleaning up after shim disconnected" id=d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7 namespace=k8s.io Jan 29 11:11:25.637195 containerd[1512]: time="2025-01-29T11:11:25.637123075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:25.638571 containerd[1512]: time="2025-01-29T11:11:25.638418197Z" level=info msg="StopPodSandbox for \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\"" Jan 29 11:11:25.638571 containerd[1512]: time="2025-01-29T11:11:25.638456197Z" level=info msg="Container to stop \"93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.638571 containerd[1512]: 
time="2025-01-29T11:11:25.638466797Z" level=info msg="Container to stop \"64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.638571 containerd[1512]: time="2025-01-29T11:11:25.638476637Z" level=info msg="Container to stop \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.638571 containerd[1512]: time="2025-01-29T11:11:25.638485357Z" level=info msg="Container to stop \"883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.638571 containerd[1512]: time="2025-01-29T11:11:25.638493717Z" level=info msg="Container to stop \"df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:11:25.644256 systemd[1]: cri-containerd-e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f.scope: Deactivated successfully. 
Jan 29 11:11:25.662237 containerd[1512]: time="2025-01-29T11:11:25.662087219Z" level=info msg="TearDown network for sandbox \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\" successfully"
Jan 29 11:11:25.662237 containerd[1512]: time="2025-01-29T11:11:25.662119819Z" level=info msg="StopPodSandbox for \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\" returns successfully"
Jan 29 11:11:25.690241 containerd[1512]: time="2025-01-29T11:11:25.689885286Z" level=info msg="shim disconnected" id=e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f namespace=k8s.io
Jan 29 11:11:25.690241 containerd[1512]: time="2025-01-29T11:11:25.690214646Z" level=warning msg="cleaning up after shim disconnected" id=e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f namespace=k8s.io
Jan 29 11:11:25.690241 containerd[1512]: time="2025-01-29T11:11:25.690225366Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:25.705481 containerd[1512]: time="2025-01-29T11:11:25.705418780Z" level=info msg="TearDown network for sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" successfully"
Jan 29 11:11:25.705481 containerd[1512]: time="2025-01-29T11:11:25.705457541Z" level=info msg="StopPodSandbox for \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" returns successfully"
Jan 29 11:11:25.710762 kubelet[2792]: I0129 11:11:25.710657 2792 scope.go:117] "RemoveContainer" containerID="dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d"
Jan 29 11:11:25.716412 containerd[1512]: time="2025-01-29T11:11:25.715885550Z" level=info msg="RemoveContainer for \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\""
Jan 29 11:11:25.719580 kubelet[2792]: I0129 11:11:25.719532 2792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f"
Jan 29 11:11:25.726357 containerd[1512]: time="2025-01-29T11:11:25.726272840Z" level=info msg="RemoveContainer for \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\" returns successfully"
Jan 29 11:11:25.726930 kubelet[2792]: I0129 11:11:25.726903 2792 scope.go:117] "RemoveContainer" containerID="dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d"
Jan 29 11:11:25.727210 containerd[1512]: time="2025-01-29T11:11:25.727169521Z" level=error msg="ContainerStatus for \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\": not found"
Jan 29 11:11:25.727673 kubelet[2792]: E0129 11:11:25.727337 2792 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\": not found" containerID="dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d"
Jan 29 11:11:25.727673 kubelet[2792]: I0129 11:11:25.727370 2792 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d"} err="failed to get container status \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd8314d3d92e2b108506f564ad904ca66b68215c7635aa216846b2e13a3ad35d\": not found"
Jan 29 11:11:25.785023 kubelet[2792]: I0129 11:11:25.784381 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv2gp\" (UniqueName: \"kubernetes.io/projected/7508022a-d326-4090-982b-2f0bc1f4d77c-kube-api-access-hv2gp\") pod \"7508022a-d326-4090-982b-2f0bc1f4d77c\" (UID: \"7508022a-d326-4090-982b-2f0bc1f4d77c\") "
Jan 29 11:11:25.785023 kubelet[2792]: I0129 11:11:25.784459 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7508022a-d326-4090-982b-2f0bc1f4d77c-cilium-config-path\") pod \"7508022a-d326-4090-982b-2f0bc1f4d77c\" (UID: \"7508022a-d326-4090-982b-2f0bc1f4d77c\") "
Jan 29 11:11:25.788378 kubelet[2792]: I0129 11:11:25.788281 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7508022a-d326-4090-982b-2f0bc1f4d77c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7508022a-d326-4090-982b-2f0bc1f4d77c" (UID: "7508022a-d326-4090-982b-2f0bc1f4d77c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 11:11:25.788503 kubelet[2792]: I0129 11:11:25.788416 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7508022a-d326-4090-982b-2f0bc1f4d77c-kube-api-access-hv2gp" (OuterVolumeSpecName: "kube-api-access-hv2gp") pod "7508022a-d326-4090-982b-2f0bc1f4d77c" (UID: "7508022a-d326-4090-982b-2f0bc1f4d77c"). InnerVolumeSpecName "kube-api-access-hv2gp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:11:25.816890 systemd[1]: Removed slice kubepods-besteffort-pod7508022a_d326_4090_982b_2f0bc1f4d77c.slice - libcontainer container kubepods-besteffort-pod7508022a_d326_4090_982b_2f0bc1f4d77c.slice.
Jan 29 11:11:25.885991 kubelet[2792]: I0129 11:11:25.884691 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-config-path\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.885991 kubelet[2792]: I0129 11:11:25.884812 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-run\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.885991 kubelet[2792]: I0129 11:11:25.884853 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-host-proc-sys-kernel\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.885991 kubelet[2792]: I0129 11:11:25.884926 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv4qr\" (UniqueName: \"kubernetes.io/projected/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-kube-api-access-hv4qr\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.885991 kubelet[2792]: I0129 11:11:25.884963 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-xtables-lock\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.885991 kubelet[2792]: I0129 11:11:25.884992 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-lib-modules\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886421 kubelet[2792]: I0129 11:11:25.885027 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-cgroup\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886421 kubelet[2792]: I0129 11:11:25.885060 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-etc-cni-netd\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886421 kubelet[2792]: I0129 11:11:25.885096 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-hubble-tls\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886421 kubelet[2792]: I0129 11:11:25.885138 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-clustermesh-secrets\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886421 kubelet[2792]: I0129 11:11:25.885170 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-host-proc-sys-net\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886421 kubelet[2792]: I0129 11:11:25.885199 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-bpf-maps\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886759 kubelet[2792]: I0129 11:11:25.885229 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-hostproc\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886759 kubelet[2792]: I0129 11:11:25.885266 2792 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cni-path\") pod \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\" (UID: \"402dd2fc-1dd8-4f51-8ae9-025541aebcbb\") "
Jan 29 11:11:25.886759 kubelet[2792]: I0129 11:11:25.885336 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hv2gp\" (UniqueName: \"kubernetes.io/projected/7508022a-d326-4090-982b-2f0bc1f4d77c-kube-api-access-hv2gp\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.886759 kubelet[2792]: I0129 11:11:25.885356 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7508022a-d326-4090-982b-2f0bc1f4d77c-cilium-config-path\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.886759 kubelet[2792]: I0129 11:11:25.885411 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cni-path" (OuterVolumeSpecName: "cni-path") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.886759 kubelet[2792]: I0129 11:11:25.885465 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.887064 kubelet[2792]: I0129 11:11:25.885495 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.887064 kubelet[2792]: I0129 11:11:25.886571 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 29 11:11:25.889764 kubelet[2792]: I0129 11:11:25.889272 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:11:25.889764 kubelet[2792]: I0129 11:11:25.889327 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.889764 kubelet[2792]: I0129 11:11:25.889347 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.889764 kubelet[2792]: I0129 11:11:25.889363 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.889764 kubelet[2792]: I0129 11:11:25.889382 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.890119 kubelet[2792]: I0129 11:11:25.889403 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.890488 kubelet[2792]: I0129 11:11:25.890424 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-hostproc" (OuterVolumeSpecName: "hostproc") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.890691 kubelet[2792]: I0129 11:11:25.890351 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jan 29 11:11:25.891871 kubelet[2792]: I0129 11:11:25.891833 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-kube-api-access-hv4qr" (OuterVolumeSpecName: "kube-api-access-hv4qr") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "kube-api-access-hv4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 29 11:11:25.892471 kubelet[2792]: I0129 11:11:25.892449 2792 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "402dd2fc-1dd8-4f51-8ae9-025541aebcbb" (UID: "402dd2fc-1dd8-4f51-8ae9-025541aebcbb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 29 11:11:25.986541 kubelet[2792]: I0129 11:11:25.986473 2792 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hv4qr\" (UniqueName: \"kubernetes.io/projected/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-kube-api-access-hv4qr\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986541 kubelet[2792]: I0129 11:11:25.986539 2792 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-xtables-lock\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986831 kubelet[2792]: I0129 11:11:25.986565 2792 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-lib-modules\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986831 kubelet[2792]: I0129 11:11:25.986585 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-cgroup\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986831 kubelet[2792]: I0129 11:11:25.986606 2792 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-etc-cni-netd\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986831 kubelet[2792]: I0129 11:11:25.986624 2792 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-hubble-tls\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986831 kubelet[2792]: I0129 11:11:25.986643 2792 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-clustermesh-secrets\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986831 kubelet[2792]: I0129 11:11:25.986663 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-host-proc-sys-net\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986831 kubelet[2792]: I0129 11:11:25.986689 2792 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-bpf-maps\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.986831 kubelet[2792]: I0129 11:11:25.986741 2792 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-hostproc\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.987195 kubelet[2792]: I0129 11:11:25.986764 2792 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cni-path\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.987195 kubelet[2792]: I0129 11:11:25.986783 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-config-path\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.987195 kubelet[2792]: I0129 11:11:25.986817 2792 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-cilium-run\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:25.987195 kubelet[2792]: I0129 11:11:25.986837 2792 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/402dd2fc-1dd8-4f51-8ae9-025541aebcbb-host-proc-sys-kernel\") on node \"ci-4152-2-0-3-44dff38e5d\" DevicePath \"\""
Jan 29 11:11:26.503470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7-rootfs.mount: Deactivated successfully.
Jan 29 11:11:26.503618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f-rootfs.mount: Deactivated successfully.
Jan 29 11:11:26.503789 systemd[1]: var-lib-kubelet-pods-7508022a\x2dd326\x2d4090\x2d982b\x2d2f0bc1f4d77c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhv2gp.mount: Deactivated successfully.
Jan 29 11:11:26.503942 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f-shm.mount: Deactivated successfully.
Jan 29 11:11:26.504048 systemd[1]: var-lib-kubelet-pods-402dd2fc\x2d1dd8\x2d4f51\x2d8ae9\x2d025541aebcbb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhv4qr.mount: Deactivated successfully.
Jan 29 11:11:26.504134 systemd[1]: var-lib-kubelet-pods-402dd2fc\x2d1dd8\x2d4f51\x2d8ae9\x2d025541aebcbb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 29 11:11:26.504240 systemd[1]: var-lib-kubelet-pods-402dd2fc\x2d1dd8\x2d4f51\x2d8ae9\x2d025541aebcbb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 29 11:11:26.732796 systemd[1]: Removed slice kubepods-burstable-pod402dd2fc_1dd8_4f51_8ae9_025541aebcbb.slice - libcontainer container kubepods-burstable-pod402dd2fc_1dd8_4f51_8ae9_025541aebcbb.slice.
Jan 29 11:11:26.732901 systemd[1]: kubepods-burstable-pod402dd2fc_1dd8_4f51_8ae9_025541aebcbb.slice: Consumed 7.643s CPU time.
Jan 29 11:11:27.587347 sshd[4374]: Connection closed by 147.75.109.163 port 59216
Jan 29 11:11:27.588126 sshd-session[4372]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:27.592389 systemd[1]: sshd@20-168.119.110.78:22-147.75.109.163:59216.service: Deactivated successfully.
Jan 29 11:11:27.594408 systemd[1]: session-20.scope: Deactivated successfully.
Jan 29 11:11:27.596537 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit.
Jan 29 11:11:27.598000 systemd-logind[1493]: Removed session 20.
Jan 29 11:11:27.762083 systemd[1]: Started sshd@21-168.119.110.78:22-147.75.109.163:54184.service - OpenSSH per-connection server daemon (147.75.109.163:54184).
Jan 29 11:11:27.810268 kubelet[2792]: I0129 11:11:27.809042 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="402dd2fc-1dd8-4f51-8ae9-025541aebcbb" path="/var/lib/kubelet/pods/402dd2fc-1dd8-4f51-8ae9-025541aebcbb/volumes"
Jan 29 11:11:27.810268 kubelet[2792]: I0129 11:11:27.809888 2792 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7508022a-d326-4090-982b-2f0bc1f4d77c" path="/var/lib/kubelet/pods/7508022a-d326-4090-982b-2f0bc1f4d77c/volumes"
Jan 29 11:11:28.758563 sshd[4531]: Accepted publickey for core from 147.75.109.163 port 54184 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:28.760941 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:28.765806 systemd-logind[1493]: New session 21 of user core.
Jan 29 11:11:28.769876 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 11:11:28.985507 kubelet[2792]: E0129 11:11:28.985357 2792 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:11:30.372262 kubelet[2792]: I0129 11:11:30.370223 2792 memory_manager.go:355] "RemoveStaleState removing state" podUID="402dd2fc-1dd8-4f51-8ae9-025541aebcbb" containerName="cilium-agent"
Jan 29 11:11:30.372262 kubelet[2792]: I0129 11:11:30.370259 2792 memory_manager.go:355] "RemoveStaleState removing state" podUID="7508022a-d326-4090-982b-2f0bc1f4d77c" containerName="cilium-operator"
Jan 29 11:11:30.381665 systemd[1]: Created slice kubepods-burstable-pod087daaf4_8456_4e9e_95a2_e3be6f80fb1d.slice - libcontainer container kubepods-burstable-pod087daaf4_8456_4e9e_95a2_e3be6f80fb1d.slice.
Jan 29 11:11:30.519013 kubelet[2792]: I0129 11:11:30.518368 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-cilium-config-path\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519013 kubelet[2792]: I0129 11:11:30.518438 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-host-proc-sys-net\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519013 kubelet[2792]: I0129 11:11:30.518484 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-hostproc\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519013 kubelet[2792]: I0129 11:11:30.518521 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-xtables-lock\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519013 kubelet[2792]: I0129 11:11:30.518558 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvzv4\" (UniqueName: \"kubernetes.io/projected/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-kube-api-access-lvzv4\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519013 kubelet[2792]: I0129 11:11:30.518595 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-lib-modules\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519490 kubelet[2792]: I0129 11:11:30.518627 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-host-proc-sys-kernel\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519490 kubelet[2792]: I0129 11:11:30.518665 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-etc-cni-netd\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519490 kubelet[2792]: I0129 11:11:30.518764 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-clustermesh-secrets\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519490 kubelet[2792]: I0129 11:11:30.518809 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-cni-path\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519490 kubelet[2792]: I0129 11:11:30.518846 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-cilium-run\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519490 kubelet[2792]: I0129 11:11:30.518887 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-cilium-cgroup\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519908 kubelet[2792]: I0129 11:11:30.518926 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-cilium-ipsec-secrets\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519908 kubelet[2792]: I0129 11:11:30.518968 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-bpf-maps\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.519908 kubelet[2792]: I0129 11:11:30.519056 2792 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/087daaf4-8456-4e9e-95a2-e3be6f80fb1d-hubble-tls\") pod \"cilium-l6nxt\" (UID: \"087daaf4-8456-4e9e-95a2-e3be6f80fb1d\") " pod="kube-system/cilium-l6nxt"
Jan 29 11:11:30.551967 sshd[4533]: Connection closed by 147.75.109.163 port 54184
Jan 29 11:11:30.554033 sshd-session[4531]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:30.561789 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit.
Jan 29 11:11:30.562251 systemd[1]: sshd@21-168.119.110.78:22-147.75.109.163:54184.service: Deactivated successfully.
Jan 29 11:11:30.566653 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 11:11:30.568848 systemd-logind[1493]: Removed session 21.
Jan 29 11:11:30.690097 containerd[1512]: time="2025-01-29T11:11:30.689964014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l6nxt,Uid:087daaf4-8456-4e9e-95a2-e3be6f80fb1d,Namespace:kube-system,Attempt:0,}"
Jan 29 11:11:30.712338 containerd[1512]: time="2025-01-29T11:11:30.712129915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:11:30.712338 containerd[1512]: time="2025-01-29T11:11:30.712178795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:11:30.712338 containerd[1512]: time="2025-01-29T11:11:30.712190235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:11:30.712338 containerd[1512]: time="2025-01-29T11:11:30.712265515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:11:30.732901 systemd[1]: Started cri-containerd-5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b.scope - libcontainer container 5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b.
Jan 29 11:11:30.735146 systemd[1]: Started sshd@22-168.119.110.78:22-147.75.109.163:54188.service - OpenSSH per-connection server daemon (147.75.109.163:54188).
Jan 29 11:11:30.768094 containerd[1512]: time="2025-01-29T11:11:30.768019488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l6nxt,Uid:087daaf4-8456-4e9e-95a2-e3be6f80fb1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\""
Jan 29 11:11:30.774095 containerd[1512]: time="2025-01-29T11:11:30.774056534Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 29 11:11:30.790990 containerd[1512]: time="2025-01-29T11:11:30.790904510Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d29180915c6c68a372b4a28c58ec51bef7c10f235bb18b12c71e66eaff34aefc\""
Jan 29 11:11:30.791655 containerd[1512]: time="2025-01-29T11:11:30.791582951Z" level=info msg="StartContainer for \"d29180915c6c68a372b4a28c58ec51bef7c10f235bb18b12c71e66eaff34aefc\""
Jan 29 11:11:30.821492 systemd[1]: Started cri-containerd-d29180915c6c68a372b4a28c58ec51bef7c10f235bb18b12c71e66eaff34aefc.scope - libcontainer container d29180915c6c68a372b4a28c58ec51bef7c10f235bb18b12c71e66eaff34aefc.
Jan 29 11:11:30.858940 containerd[1512]: time="2025-01-29T11:11:30.858850415Z" level=info msg="StartContainer for \"d29180915c6c68a372b4a28c58ec51bef7c10f235bb18b12c71e66eaff34aefc\" returns successfully"
Jan 29 11:11:30.872145 systemd[1]: cri-containerd-d29180915c6c68a372b4a28c58ec51bef7c10f235bb18b12c71e66eaff34aefc.scope: Deactivated successfully.
Jan 29 11:11:30.904291 containerd[1512]: time="2025-01-29T11:11:30.903942258Z" level=info msg="shim disconnected" id=d29180915c6c68a372b4a28c58ec51bef7c10f235bb18b12c71e66eaff34aefc namespace=k8s.io
Jan 29 11:11:30.904291 containerd[1512]: time="2025-01-29T11:11:30.904031498Z" level=warning msg="cleaning up after shim disconnected" id=d29180915c6c68a372b4a28c58ec51bef7c10f235bb18b12c71e66eaff34aefc namespace=k8s.io
Jan 29 11:11:30.904291 containerd[1512]: time="2025-01-29T11:11:30.904050338Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:31.631464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2855420606.mount: Deactivated successfully.
Jan 29 11:11:31.737979 sshd[4576]: Accepted publickey for core from 147.75.109.163 port 54188 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:31.740033 sshd-session[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:31.746257 containerd[1512]: time="2025-01-29T11:11:31.746220941Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 29 11:11:31.750541 systemd-logind[1493]: New session 22 of user core.
Jan 29 11:11:31.753918 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 11:11:31.763499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2053120185.mount: Deactivated successfully.
Jan 29 11:11:31.771909 containerd[1512]: time="2025-01-29T11:11:31.771748245Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39\""
Jan 29 11:11:31.776074 containerd[1512]: time="2025-01-29T11:11:31.773405007Z" level=info msg="StartContainer for \"881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39\""
Jan 29 11:11:31.808036 systemd[1]: Started cri-containerd-881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39.scope - libcontainer container 881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39.
Jan 29 11:11:31.840599 containerd[1512]: time="2025-01-29T11:11:31.839823750Z" level=info msg="StartContainer for \"881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39\" returns successfully"
Jan 29 11:11:31.852988 systemd[1]: cri-containerd-881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39.scope: Deactivated successfully.
Jan 29 11:11:31.877735 containerd[1512]: time="2025-01-29T11:11:31.877512986Z" level=info msg="shim disconnected" id=881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39 namespace=k8s.io
Jan 29 11:11:31.877735 containerd[1512]: time="2025-01-29T11:11:31.877565906Z" level=warning msg="cleaning up after shim disconnected" id=881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39 namespace=k8s.io
Jan 29 11:11:31.877735 containerd[1512]: time="2025-01-29T11:11:31.877575066Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:32.412746 sshd[4659]: Connection closed by 147.75.109.163 port 54188
Jan 29 11:11:32.413483 sshd-session[4576]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:32.417444 systemd[1]: sshd@22-168.119.110.78:22-147.75.109.163:54188.service: Deactivated successfully.
Jan 29 11:11:32.420188 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 11:11:32.421348 systemd-logind[1493]: Session 22 logged out. Waiting for processes to exit.
Jan 29 11:11:32.422483 systemd-logind[1493]: Removed session 22.
Jan 29 11:11:32.596906 systemd[1]: Started sshd@23-168.119.110.78:22-147.75.109.163:54200.service - OpenSSH per-connection server daemon (147.75.109.163:54200).
Jan 29 11:11:32.630473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-881ae33bc46b778a44a3b95c3a40854e21bc0f1e6ec1f0a9b20828cb84884f39-rootfs.mount: Deactivated successfully.
Jan 29 11:11:32.750386 containerd[1512]: time="2025-01-29T11:11:32.749831458Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 29 11:11:32.773198 containerd[1512]: time="2025-01-29T11:11:32.773153080Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139\""
Jan 29 11:11:32.774421 containerd[1512]: time="2025-01-29T11:11:32.774388722Z" level=info msg="StartContainer for \"57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139\""
Jan 29 11:11:32.811978 systemd[1]: Started cri-containerd-57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139.scope - libcontainer container 57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139.
Jan 29 11:11:32.858321 containerd[1512]: time="2025-01-29T11:11:32.857483801Z" level=info msg="StartContainer for \"57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139\" returns successfully"
Jan 29 11:11:32.864056 systemd[1]: cri-containerd-57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139.scope: Deactivated successfully.
Jan 29 11:11:32.889740 containerd[1512]: time="2025-01-29T11:11:32.889405431Z" level=info msg="shim disconnected" id=57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139 namespace=k8s.io
Jan 29 11:11:32.889740 containerd[1512]: time="2025-01-29T11:11:32.889480391Z" level=warning msg="cleaning up after shim disconnected" id=57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139 namespace=k8s.io
Jan 29 11:11:32.889740 containerd[1512]: time="2025-01-29T11:11:32.889495551Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:33.591484 sshd[4725]: Accepted publickey for core from 147.75.109.163 port 54200 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo
Jan 29 11:11:33.593609 sshd-session[4725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:11:33.598385 systemd-logind[1493]: New session 23 of user core.
Jan 29 11:11:33.606577 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 11:11:33.631495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57e167674a801a4e66f188d5aae0cb41994676ceeaf37bfd706999ef06fdf139-rootfs.mount: Deactivated successfully.
Jan 29 11:11:33.755575 containerd[1512]: time="2025-01-29T11:11:33.755534617Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 29 11:11:33.779035 containerd[1512]: time="2025-01-29T11:11:33.778990000Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544\""
Jan 29 11:11:33.781309 containerd[1512]: time="2025-01-29T11:11:33.781060122Z" level=info msg="StartContainer for \"74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544\""
Jan 29 11:11:33.806725 kubelet[2792]: E0129 11:11:33.806440 2792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-9dq5h" podUID="fd4f8c42-4acb-4426-8cf8-854be01785d8"
Jan 29 11:11:33.815918 systemd[1]: Started cri-containerd-74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544.scope - libcontainer container 74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544.
Jan 29 11:11:33.851144 containerd[1512]: time="2025-01-29T11:11:33.850715548Z" level=info msg="StartContainer for \"74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544\" returns successfully"
Jan 29 11:11:33.853936 systemd[1]: cri-containerd-74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544.scope: Deactivated successfully.
Jan 29 11:11:33.888595 containerd[1512]: time="2025-01-29T11:11:33.888472104Z" level=info msg="shim disconnected" id=74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544 namespace=k8s.io
Jan 29 11:11:33.888595 containerd[1512]: time="2025-01-29T11:11:33.888560864Z" level=warning msg="cleaning up after shim disconnected" id=74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544 namespace=k8s.io
Jan 29 11:11:33.888595 containerd[1512]: time="2025-01-29T11:11:33.888578784Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:11:33.987157 kubelet[2792]: E0129 11:11:33.987105 2792 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 29 11:11:34.632770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74906882437fcade7fd524cd70cf683dd78d3eaf2dff76113d2a2b17ae750544-rootfs.mount: Deactivated successfully.
Jan 29 11:11:34.762398 containerd[1512]: time="2025-01-29T11:11:34.762356737Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 29 11:11:34.791502 containerd[1512]: time="2025-01-29T11:11:34.790093844Z" level=info msg="CreateContainer within sandbox \"5a06ce5fbff457cb3c9e29995d05c72c8e3779ed07685f9039a55c8a1a10cd8b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a3ad86e4fd1daea7bb3a76e0aa25e82cc140df51c80360df1cef6174e0a67c3\""
Jan 29 11:11:34.791689 containerd[1512]: time="2025-01-29T11:11:34.791642925Z" level=info msg="StartContainer for \"0a3ad86e4fd1daea7bb3a76e0aa25e82cc140df51c80360df1cef6174e0a67c3\""
Jan 29 11:11:34.825970 systemd[1]: Started cri-containerd-0a3ad86e4fd1daea7bb3a76e0aa25e82cc140df51c80360df1cef6174e0a67c3.scope - libcontainer container 0a3ad86e4fd1daea7bb3a76e0aa25e82cc140df51c80360df1cef6174e0a67c3.
Jan 29 11:11:34.868221 containerd[1512]: time="2025-01-29T11:11:34.868171558Z" level=info msg="StartContainer for \"0a3ad86e4fd1daea7bb3a76e0aa25e82cc140df51c80360df1cef6174e0a67c3\" returns successfully"
Jan 29 11:11:35.154739 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 29 11:11:35.787134 kubelet[2792]: I0129 11:11:35.785538 2792 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l6nxt" podStartSLOduration=5.785503153 podStartE2EDuration="5.785503153s" podCreationTimestamp="2025-01-29 11:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:11:35.785506433 +0000 UTC m=+352.092441755" watchObservedRunningTime="2025-01-29 11:11:35.785503153 +0000 UTC m=+352.092438475"
Jan 29 11:11:35.807755 kubelet[2792]: E0129 11:11:35.805898 2792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-9dq5h" podUID="fd4f8c42-4acb-4426-8cf8-854be01785d8"
Jan 29 11:11:36.280384 systemd[1]: run-containerd-runc-k8s.io-0a3ad86e4fd1daea7bb3a76e0aa25e82cc140df51c80360df1cef6174e0a67c3-runc.pdkIwo.mount: Deactivated successfully.
Jan 29 11:11:37.806981 kubelet[2792]: E0129 11:11:37.806277 2792 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-668d6bf9bc-9dq5h" podUID="fd4f8c42-4acb-4426-8cf8-854be01785d8"
Jan 29 11:11:38.102423 systemd-networkd[1376]: lxc_health: Link UP
Jan 29 11:11:38.114845 systemd-networkd[1376]: lxc_health: Gained carrier
Jan 29 11:11:38.136270 kubelet[2792]: I0129 11:11:38.135972 2792 setters.go:602] "Node became not ready" node="ci-4152-2-0-3-44dff38e5d" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:11:38Z","lastTransitionTime":"2025-01-29T11:11:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 29 11:11:39.368906 systemd-networkd[1376]: lxc_health: Gained IPv6LL
Jan 29 11:11:40.640531 systemd[1]: run-containerd-runc-k8s.io-0a3ad86e4fd1daea7bb3a76e0aa25e82cc140df51c80360df1cef6174e0a67c3-runc.pUpk80.mount: Deactivated successfully.
Jan 29 11:11:43.851685 kubelet[2792]: I0129 11:11:43.851516 2792 scope.go:117] "RemoveContainer" containerID="883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036"
Jan 29 11:11:43.852948 containerd[1512]: time="2025-01-29T11:11:43.852856463Z" level=info msg="RemoveContainer for \"883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036\""
Jan 29 11:11:43.857577 containerd[1512]: time="2025-01-29T11:11:43.857542143Z" level=info msg="RemoveContainer for \"883f0e9f6ac37653a05889abe6456cadf7047a86386c6206c5dedda1fba15036\" returns successfully"
Jan 29 11:11:43.857973 kubelet[2792]: I0129 11:11:43.857761 2792 scope.go:117] "RemoveContainer" containerID="df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47"
Jan 29 11:11:43.859240 containerd[1512]: time="2025-01-29T11:11:43.859207918Z" level=info msg="RemoveContainer for \"df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47\""
Jan 29 11:11:43.862375 containerd[1512]: time="2025-01-29T11:11:43.862345385Z" level=info msg="RemoveContainer for \"df2e18c7c7271b584c61858a826e0657bfe34fda843985c578f3c9fed85c9b47\" returns successfully"
Jan 29 11:11:43.862640 kubelet[2792]: I0129 11:11:43.862584 2792 scope.go:117] "RemoveContainer" containerID="93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565"
Jan 29 11:11:43.863675 containerd[1512]: time="2025-01-29T11:11:43.863581676Z" level=info msg="RemoveContainer for \"93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565\""
Jan 29 11:11:43.867390 containerd[1512]: time="2025-01-29T11:11:43.867358829Z" level=info msg="RemoveContainer for \"93e067d9455172bf49ceeaf0fb99ce22c1a8b0977a8ade1c821716ac4279a565\" returns successfully"
Jan 29 11:11:43.867612 kubelet[2792]: I0129 11:11:43.867578 2792 scope.go:117] "RemoveContainer" containerID="18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe"
Jan 29 11:11:43.868839 containerd[1512]: time="2025-01-29T11:11:43.868789681Z" level=info msg="RemoveContainer for \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\""
Jan 29 11:11:43.872119 containerd[1512]: time="2025-01-29T11:11:43.871946269Z" level=info msg="RemoveContainer for \"18aef1e50588cf41b48892f2734b9ee4bf06c93ded57f27fecc10666e1251dbe\" returns successfully"
Jan 29 11:11:43.872201 kubelet[2792]: I0129 11:11:43.872129 2792 scope.go:117] "RemoveContainer" containerID="64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e"
Jan 29 11:11:43.874162 containerd[1512]: time="2025-01-29T11:11:43.874060727Z" level=info msg="RemoveContainer for \"64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e\""
Jan 29 11:11:43.876898 containerd[1512]: time="2025-01-29T11:11:43.876873631Z" level=info msg="RemoveContainer for \"64719dc4165a781ffeeac68a364da4b5e4b2a204f8be11a952b8d17b1463ea7e\" returns successfully"
Jan 29 11:11:43.878194 containerd[1512]: time="2025-01-29T11:11:43.878173283Z" level=info msg="StopPodSandbox for \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\""
Jan 29 11:11:43.878268 containerd[1512]: time="2025-01-29T11:11:43.878241803Z" level=info msg="TearDown network for sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" successfully"
Jan 29 11:11:43.878268 containerd[1512]: time="2025-01-29T11:11:43.878253083Z" level=info msg="StopPodSandbox for \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" returns successfully"
Jan 29 11:11:43.878667 containerd[1512]: time="2025-01-29T11:11:43.878641407Z" level=info msg="RemovePodSandbox for \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\""
Jan 29 11:11:43.878749 containerd[1512]: time="2025-01-29T11:11:43.878684247Z" level=info msg="Forcibly stopping sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\""
Jan 29 11:11:43.878815 containerd[1512]: time="2025-01-29T11:11:43.878749728Z" level=info msg="TearDown network for sandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" successfully"
Jan 29 11:11:43.882162 containerd[1512]: time="2025-01-29T11:11:43.882130717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:11:43.882224 containerd[1512]: time="2025-01-29T11:11:43.882184837Z" level=info msg="RemovePodSandbox \"e00a249d7f4b34991762f4f7d2e8d11751d8d98b24b95b6d15681e35f339b56f\" returns successfully"
Jan 29 11:11:43.882744 containerd[1512]: time="2025-01-29T11:11:43.882719202Z" level=info msg="StopPodSandbox for \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\""
Jan 29 11:11:43.882821 containerd[1512]: time="2025-01-29T11:11:43.882787083Z" level=info msg="TearDown network for sandbox \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\" successfully"
Jan 29 11:11:43.882821 containerd[1512]: time="2025-01-29T11:11:43.882797243Z" level=info msg="StopPodSandbox for \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\" returns successfully"
Jan 29 11:11:43.883106 containerd[1512]: time="2025-01-29T11:11:43.883074805Z" level=info msg="RemovePodSandbox for \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\""
Jan 29 11:11:43.883106 containerd[1512]: time="2025-01-29T11:11:43.883100245Z" level=info msg="Forcibly stopping sandbox \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\""
Jan 29 11:11:43.883174 containerd[1512]: time="2025-01-29T11:11:43.883146526Z" level=info msg="TearDown network for sandbox \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\" successfully"
Jan 29 11:11:43.888394 containerd[1512]: time="2025-01-29T11:11:43.888262690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 29 11:11:43.888394 containerd[1512]: time="2025-01-29T11:11:43.888315291Z" level=info msg="RemovePodSandbox \"d048ddfd1e444cbef7f6f5eb90ba84888b3a48fe56bffa80cf8ffc10b12b58b7\" returns successfully"
Jan 29 11:11:45.130989 sshd[4786]: Connection closed by 147.75.109.163 port 54200
Jan 29 11:11:45.131954 sshd-session[4725]: pam_unix(sshd:session): session closed for user core
Jan 29 11:11:45.136308 systemd[1]: sshd@23-168.119.110.78:22-147.75.109.163:54200.service: Deactivated successfully.
Jan 29 11:11:45.141569 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 11:11:45.145245 systemd-logind[1493]: Session 23 logged out. Waiting for processes to exit.
Jan 29 11:11:45.146653 systemd-logind[1493]: Removed session 23.