Jan 29 11:08:05.887368 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:08:05.887389 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:08:05.887399 kernel: KASLR enabled
Jan 29 11:08:05.887405 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 29 11:08:05.887411 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98
Jan 29 11:08:05.887416 kernel: random: crng init done
Jan 29 11:08:05.887423 kernel: secureboot: Secure boot disabled
Jan 29 11:08:05.887429 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:08:05.887435 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 29 11:08:05.887443 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:08:05.887449 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887455 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887461 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887467 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887474 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887482 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887488 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887494 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887500 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:05.887506 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 29 11:08:05.887513 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 29 11:08:05.887527 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:08:05.887534 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 11:08:05.887540 kernel: NUMA: NODE_DATA [mem 0x139670800-0x139675fff]
Jan 29 11:08:05.887546 kernel: Zone ranges:
Jan 29 11:08:05.887554 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 29 11:08:05.887560 kernel: DMA32 empty
Jan 29 11:08:05.887566 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 29 11:08:05.887573 kernel: Movable zone start for each node
Jan 29 11:08:05.887579 kernel: Early memory node ranges
Jan 29 11:08:05.887585 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jan 29 11:08:05.887591 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 29 11:08:05.887597 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 29 11:08:05.887603 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 29 11:08:05.887609 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 29 11:08:05.887616 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 29 11:08:05.887622 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 29 11:08:05.887629 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 29 11:08:05.887635 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 29 11:08:05.887642 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:08:05.887650 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:08:05.887657 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:08:05.887664 kernel: psci: Trusted OS migration not required
Jan 29 11:08:05.887672 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:08:05.887678 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:08:05.887685 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:08:05.887692 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:08:05.887698 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 29 11:08:05.887705 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:08:05.887711 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:08:05.887718 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:08:05.887725 kernel: CPU features: detected: Spectre-v4
Jan 29 11:08:05.887731 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:08:05.887739 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:08:05.887746 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:08:05.887753 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:08:05.887759 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:08:05.887766 kernel: alternatives: applying boot alternatives
Jan 29 11:08:05.887773 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:08:05.887780 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:08:05.887787 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:08:05.887793 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:08:05.887800 kernel: Fallback order for Node 0: 0
Jan 29 11:08:05.887806 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 29 11:08:05.887814 kernel: Policy zone: Normal
Jan 29 11:08:05.887822 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:08:05.887828 kernel: software IO TLB: area num 2.
Jan 29 11:08:05.887835 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 29 11:08:05.887842 kernel: Memory: 3882684K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 213316K reserved, 0K cma-reserved)
Jan 29 11:08:05.887848 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 29 11:08:05.887855 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:08:05.887862 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:08:05.887869 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 29 11:08:05.887875 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:08:05.887938 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:08:05.887946 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:08:05.887955 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 29 11:08:05.887962 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:08:05.887968 kernel: GICv3: 256 SPIs implemented
Jan 29 11:08:05.887975 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:08:05.887981 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:08:05.887988 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:08:05.887994 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:08:05.888001 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:08:05.888008 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:08:05.888014 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:08:05.888021 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 29 11:08:05.888029 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 29 11:08:05.888036 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:08:05.888042 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:08:05.888049 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:08:05.888056 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:08:05.888062 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:08:05.888069 kernel: Console: colour dummy device 80x25
Jan 29 11:08:05.888076 kernel: ACPI: Core revision 20230628
Jan 29 11:08:05.888083 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:08:05.888089 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:08:05.888098 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:08:05.888105 kernel: landlock: Up and running.
Jan 29 11:08:05.888111 kernel: SELinux: Initializing.
Jan 29 11:08:05.888118 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:08:05.888125 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:08:05.888131 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:08:05.888138 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 29 11:08:05.888145 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:08:05.888152 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:08:05.888158 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:08:05.888166 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:08:05.888173 kernel: Remapping and enabling EFI services.
Jan 29 11:08:05.888180 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:08:05.888186 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:08:05.888193 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:08:05.888200 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 29 11:08:05.888207 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:08:05.888215 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:08:05.888223 kernel: smp: Brought up 1 node, 2 CPUs
Jan 29 11:08:05.888233 kernel: SMP: Total of 2 processors activated.
Jan 29 11:08:05.888241 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:08:05.888255 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:08:05.888264 kernel: CPU features: detected: Common not Private translations
Jan 29 11:08:05.888271 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:08:05.888278 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:08:05.888285 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:08:05.888292 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:08:05.888299 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:08:05.888308 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:08:05.888315 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:08:05.888327 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:08:05.888334 kernel: alternatives: applying system-wide alternatives
Jan 29 11:08:05.888341 kernel: devtmpfs: initialized
Jan 29 11:08:05.888348 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:08:05.888355 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 29 11:08:05.888363 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:08:05.888371 kernel: SMBIOS 3.0.0 present.
Jan 29 11:08:05.888378 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 29 11:08:05.888386 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:08:05.888393 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:08:05.888400 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:08:05.888407 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:08:05.888414 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:08:05.888421 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Jan 29 11:08:05.888428 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:08:05.888437 kernel: cpuidle: using governor menu
Jan 29 11:08:05.888444 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:08:05.888455 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:08:05.888462 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:08:05.888469 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:08:05.888476 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:08:05.888483 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:08:05.888490 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:08:05.888498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:08:05.888507 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:08:05.888514 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:08:05.888521 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:08:05.888528 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:08:05.888535 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:08:05.888542 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:08:05.888549 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:08:05.888556 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:08:05.888563 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:08:05.888572 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:08:05.888579 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:08:05.888586 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:08:05.888593 kernel: ACPI: Interpreter enabled
Jan 29 11:08:05.888600 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:08:05.888608 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:08:05.888615 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:08:05.888622 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:08:05.888629 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:08:05.888777 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:08:05.888850 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:08:05.888976 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:08:05.889045 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:08:05.889107 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:08:05.889117 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:08:05.889124 kernel: PCI host bridge to bus 0000:00
Jan 29 11:08:05.889201 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:08:05.889260 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:08:05.889318 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:08:05.889379 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:08:05.889457 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:08:05.889534 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 29 11:08:05.889604 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 29 11:08:05.889676 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 11:08:05.889755 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.889821 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 29 11:08:05.889920 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.890023 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 29 11:08:05.890101 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.890170 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 29 11:08:05.890242 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.890307 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 29 11:08:05.890379 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.890445 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 29 11:08:05.890519 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.890584 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 29 11:08:05.890655 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.890720 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 29 11:08:05.890953 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.891034 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 29 11:08:05.891109 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 29 11:08:05.891183 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 29 11:08:05.891260 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 29 11:08:05.891326 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jan 29 11:08:05.891401 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 11:08:05.891469 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 29 11:08:05.891536 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:08:05.891606 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 11:08:05.891680 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 29 11:08:05.891748 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 29 11:08:05.891821 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 29 11:08:05.891913 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 29 11:08:05.892021 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 29 11:08:05.892099 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 29 11:08:05.892173 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 29 11:08:05.892247 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 29 11:08:05.892316 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Jan 29 11:08:05.892383 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 29 11:08:05.892457 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 29 11:08:05.892524 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 29 11:08:05.892595 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 11:08:05.892669 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 29 11:08:05.892736 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 29 11:08:05.892804 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 29 11:08:05.892872 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 29 11:08:05.893061 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 29 11:08:05.893137 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 29 11:08:05.893201 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 29 11:08:05.893267 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 29 11:08:05.893330 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 29 11:08:05.893393 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 29 11:08:05.893459 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 29 11:08:05.893524 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 29 11:08:05.893591 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 29 11:08:05.893673 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 29 11:08:05.893737 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 29 11:08:05.893800 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 29 11:08:05.893866 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 29 11:08:05.893990 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 29 11:08:05.894060 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 29 11:08:05.894128 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 29 11:08:05.894198 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 29 11:08:05.894263 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 29 11:08:05.894330 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 29 11:08:05.894393 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 29 11:08:05.894465 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 29 11:08:05.894533 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 29 11:08:05.894597 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 29 11:08:05.894664 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 29 11:08:05.894731 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 29 11:08:05.894795 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 29 11:08:05.894859 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 29 11:08:05.894947 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 29 11:08:05.895018 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:08:05.895083 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 29 11:08:05.895151 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:08:05.895223 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 29 11:08:05.895289 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:08:05.895353 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 29 11:08:05.895418 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:08:05.895483 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 29 11:08:05.895546 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:08:05.895614 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 29 11:08:05.895680 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:08:05.895744 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 29 11:08:05.895809 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:08:05.895873 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 29 11:08:05.895984 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:08:05.896055 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 29 11:08:05.896124 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:08:05.896195 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 29 11:08:05.896269 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 29 11:08:05.896349 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 29 11:08:05.896415 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 29 11:08:05.896481 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 29 11:08:05.896547 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 29 11:08:05.896612 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 29 11:08:05.896679 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 29 11:08:05.896744 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 29 11:08:05.896808 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 29 11:08:05.896872 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 29 11:08:05.897408 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 29 11:08:05.897495 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 29 11:08:05.897563 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 29 11:08:05.897629 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 29 11:08:05.897708 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 29 11:08:05.898075 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 29 11:08:05.898155 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 29 11:08:05.898220 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 29 11:08:05.898284 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 29 11:08:05.898352 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 29 11:08:05.898425 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 29 11:08:05.898493 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:08:05.898566 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 29 11:08:05.898633 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 29 11:08:05.898705 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 29 11:08:05.898774 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 29 11:08:05.898856 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:08:05.898971 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 29 11:08:05.899050 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 29 11:08:05.899115 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 29 11:08:05.899179 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 29 11:08:05.899252 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:08:05.899325 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 29 11:08:05.899394 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 29 11:08:05.899463 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 29 11:08:05.899535 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 29 11:08:05.899615 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 29 11:08:05.899682 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:08:05.899754 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 29 11:08:05.899820 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 29 11:08:05.899912 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 29 11:08:05.899996 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 29 11:08:05.900067 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:08:05.900140 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 29 11:08:05.900211 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Jan 29 11:08:05.900284 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 29 11:08:05.900349 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 29 11:08:05.900418 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 29 11:08:05.900485 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:08:05.900558 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 29 11:08:05.900631 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 29 11:08:05.900705 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 29 11:08:05.900771 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 29 11:08:05.900839 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 29 11:08:05.903266 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:08:05.903389 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 29 11:08:05.903461 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 29 11:08:05.903531 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 29 11:08:05.903614 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 29 11:08:05.903689 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 29 11:08:05.903756 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 29 11:08:05.903820 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:08:05.903981 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 29 11:08:05.904068 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 29 11:08:05.904143 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 29 11:08:05.904217 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:08:05.904283 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 29 11:08:05.904346 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 29 11:08:05.904420 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 29 11:08:05.904498 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:08:05.904573 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:08:05.904632 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:08:05.904689 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:08:05.904768 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 29 11:08:05.904828 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 29 11:08:05.904974 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 29 11:08:05.905058 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 29 11:08:05.905118 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 29 11:08:05.905177 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 29 11:08:05.905244 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 29 11:08:05.905311 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 29 11:08:05.905392 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 29 11:08:05.905465 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 29 11:08:05.905525 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 29 11:08:05.905585 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 29 11:08:05.905652 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 29 11:08:05.905714 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 29 11:08:05.905773 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 29 11:08:05.905840 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 29 11:08:05.906074 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 29 11:08:05.906152 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 29 11:08:05.906219 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 29 11:08:05.906279 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 29 11:08:05.906341 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 29 11:08:05.906407 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 29 11:08:05.906467 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 29 11:08:05.906528 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 29 11:08:05.906600 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 29 11:08:05.906671 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 29 11:08:05.906731 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 29 11:08:05.906741 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:08:05.906749 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:08:05.906757 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:08:05.906765 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:08:05.906773 kernel: iommu: Default domain type: Translated
Jan 29 11:08:05.906783 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:08:05.906791 kernel: efivars: Registered efivars operations
Jan 29 11:08:05.906798 kernel: vgaarb: loaded
Jan 29 11:08:05.906806 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:08:05.906813 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:08:05.906823 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:08:05.906830 kernel: pnp: PnP ACPI init
Jan 29 11:08:05.906919 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:08:05.906944 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:08:05.906952 kernel: NET: Registered PF_INET protocol family
Jan 29 11:08:05.906960 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:08:05.906968 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:08:05.906980 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:08:05.906988 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:08:05.907000 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:08:05.907008 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:08:05.907015 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:08:05.907025 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:08:05.907033 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:08:05.907114 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 29 11:08:05.907126 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:08:05.907133 kernel: kvm [1]: HYP mode not available
Jan 29 11:08:05.907146 kernel: Initialise system trusted keyrings
Jan 29 11:08:05.907153 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:08:05.907161 kernel: Key type asymmetric registered
Jan 29 11:08:05.907169 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:08:05.907179 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:08:05.907187 kernel: io scheduler mq-deadline registered
Jan 29 11:08:05.907194 kernel: io scheduler kyber registered
Jan 29 11:08:05.907202 kernel: io scheduler bfq registered
Jan 29 11:08:05.907210 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 29 11:08:05.907278 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 29 11:08:05.907350 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 29 11:08:05.907414 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 29 11:08:05.907483 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 29 11:08:05.907548 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 29 11:08:05.907626 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Jan 29 11:08:05.907695 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 11:08:05.907761 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 11:08:05.907830 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:08:05.910017 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 11:08:05.910109 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 11:08:05.910182 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:08:05.910257 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 11:08:05.910330 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 11:08:05.910397 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:08:05.910475 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 11:08:05.910553 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 11:08:05.910621 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:08:05.910690 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 11:08:05.910755 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 11:08:05.910821 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:08:05.910903 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 11:08:05.910982 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 11:08:05.911054 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 
11:08:05.911065 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 11:08:05.911135 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 11:08:05.911208 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 11:08:05.911282 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 11:08:05.911293 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 11:08:05.911301 kernel: ACPI: button: Power Button [PWRB] Jan 29 11:08:05.911308 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 11:08:05.911379 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 11:08:05.911455 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 11:08:05.911466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:08:05.911474 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 11:08:05.911543 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 11:08:05.911554 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 11:08:05.911562 kernel: thunder_xcv, ver 1.0 Jan 29 11:08:05.911569 kernel: thunder_bgx, ver 1.0 Jan 29 11:08:05.911577 kernel: nicpf, ver 1.0 Jan 29 11:08:05.911584 kernel: nicvf, ver 1.0 Jan 29 11:08:05.911662 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 11:08:05.911724 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:08:05 UTC (1738148885) Jan 29 11:08:05.911742 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 11:08:05.911750 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 11:08:05.911758 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 11:08:05.911765 kernel: watchdog: Hard watchdog permanently disabled Jan 29 11:08:05.911773 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:08:05.911780 kernel: Segment 
Routing with IPv6 Jan 29 11:08:05.911788 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:08:05.911795 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:08:05.911803 kernel: Key type dns_resolver registered Jan 29 11:08:05.911811 kernel: registered taskstats version 1 Jan 29 11:08:05.911820 kernel: Loading compiled-in X.509 certificates Jan 29 11:08:05.911828 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a' Jan 29 11:08:05.911835 kernel: Key type .fscrypt registered Jan 29 11:08:05.911843 kernel: Key type fscrypt-provisioning registered Jan 29 11:08:05.911850 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:08:05.911858 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:08:05.911866 kernel: ima: No architecture policies found Jan 29 11:08:05.911873 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 11:08:05.912119 kernel: clk: Disabling unused clocks Jan 29 11:08:05.912132 kernel: Freeing unused kernel memory: 39680K Jan 29 11:08:05.912139 kernel: Run /init as init process Jan 29 11:08:05.912147 kernel: with arguments: Jan 29 11:08:05.912155 kernel: /init Jan 29 11:08:05.912162 kernel: with environment: Jan 29 11:08:05.912169 kernel: HOME=/ Jan 29 11:08:05.912177 kernel: TERM=linux Jan 29 11:08:05.912184 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:08:05.912194 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:08:05.912208 systemd[1]: Detected virtualization kvm. Jan 29 11:08:05.912216 systemd[1]: Detected architecture arm64. Jan 29 11:08:05.912233 systemd[1]: Running in initrd. 
Jan 29 11:08:05.912241 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:08:05.912249 systemd[1]: Hostname set to .
Jan 29 11:08:05.912257 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:08:05.912267 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:08:05.912276 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:08:05.912284 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:08:05.912292 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:08:05.912300 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:08:05.912308 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:08:05.912316 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:08:05.912328 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:08:05.912336 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:08:05.912344 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:08:05.912352 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:08:05.912360 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:08:05.912368 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:08:05.912376 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:08:05.912384 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:08:05.912392 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:08:05.912402 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:08:05.912410 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:08:05.912418 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:08:05.912426 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:08:05.912434 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:08:05.912443 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:08:05.912457 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:08:05.912465 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:08:05.912475 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:08:05.912483 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:08:05.912491 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:08:05.912499 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:08:05.912507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:08:05.912515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:08:05.912523 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:08:05.912556 systemd-journald[237]: Collecting audit messages is disabled.
Jan 29 11:08:05.912579 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:08:05.912587 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:08:05.912597 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:08:05.912606 systemd-journald[237]: Journal started
Jan 29 11:08:05.912625 systemd-journald[237]: Runtime Journal (/run/log/journal/5a3a168230fa41559e5808773faf25db) is 8.0M, max 76.6M, 68.6M free.
Jan 29 11:08:05.914361 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:08:05.916498 systemd-modules-load[238]: Inserted module 'overlay'
Jan 29 11:08:05.921251 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:08:05.925258 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:08:05.932977 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:08:05.934960 kernel: Bridge firewalling registered
Jan 29 11:08:05.934324 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jan 29 11:08:05.937092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:08:05.941013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:08:05.945091 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:08:05.946545 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:08:05.956275 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:08:05.957058 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:08:05.967362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:08:05.970029 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:08:05.971409 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:08:05.979101 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:08:05.983161 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:08:05.994140 dracut-cmdline[274]: dracut-dracut-053
Jan 29 11:08:05.997353 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:08:06.010633 systemd-resolved[275]: Positive Trust Anchors:
Jan 29 11:08:06.010705 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:08:06.010737 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:08:06.016013 systemd-resolved[275]: Defaulting to hostname 'linux'.
Jan 29 11:08:06.017606 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:08:06.018305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:08:06.084939 kernel: SCSI subsystem initialized
Jan 29 11:08:06.089923 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:08:06.096974 kernel: iscsi: registered transport (tcp)
Jan 29 11:08:06.111410 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:08:06.111492 kernel: QLogic iSCSI HBA Driver
Jan 29 11:08:06.159910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:08:06.166084 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:08:06.185035 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:08:06.185135 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:08:06.185961 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:08:06.237999 kernel: raid6: neonx8 gen() 15439 MB/s
Jan 29 11:08:06.254960 kernel: raid6: neonx4 gen() 15200 MB/s
Jan 29 11:08:06.271944 kernel: raid6: neonx2 gen() 12996 MB/s
Jan 29 11:08:06.288978 kernel: raid6: neonx1 gen() 10289 MB/s
Jan 29 11:08:06.305963 kernel: raid6: int64x8 gen() 6785 MB/s
Jan 29 11:08:06.322967 kernel: raid6: int64x4 gen() 7163 MB/s
Jan 29 11:08:06.339973 kernel: raid6: int64x2 gen() 6021 MB/s
Jan 29 11:08:06.357009 kernel: raid6: int64x1 gen() 4967 MB/s
Jan 29 11:08:06.357117 kernel: raid6: using algorithm neonx8 gen() 15439 MB/s
Jan 29 11:08:06.373969 kernel: raid6: .... xor() 11714 MB/s, rmw enabled
Jan 29 11:08:06.374054 kernel: raid6: using neon recovery algorithm
Jan 29 11:08:06.379079 kernel: xor: measuring software checksum speed
Jan 29 11:08:06.379145 kernel: 8regs : 19487 MB/sec
Jan 29 11:08:06.380206 kernel: 32regs : 18692 MB/sec
Jan 29 11:08:06.380241 kernel: arm64_neon : 26945 MB/sec
Jan 29 11:08:06.380269 kernel: xor: using function: arm64_neon (26945 MB/sec)
Jan 29 11:08:06.430946 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:08:06.444958 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:08:06.450119 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:08:06.473911 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Jan 29 11:08:06.477372 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:08:06.485133 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:08:06.500067 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation
Jan 29 11:08:06.535586 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:08:06.544205 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:08:06.596124 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:08:06.603187 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:08:06.622600 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:08:06.624368 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:08:06.626294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:08:06.628234 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:08:06.638139 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:08:06.655793 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:08:06.682904 kernel: scsi host0: Virtio SCSI HBA
Jan 29 11:08:06.683150 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 11:08:06.683940 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 29 11:08:06.687046 kernel: ACPI: bus type USB registered
Jan 29 11:08:06.687085 kernel: usbcore: registered new interface driver usbfs
Jan 29 11:08:06.688108 kernel: usbcore: registered new interface driver hub
Jan 29 11:08:06.688133 kernel: usbcore: registered new device driver usb
Jan 29 11:08:06.726919 kernel: sr 0:0:0:0: Power-on or device reset occurred
Jan 29 11:08:06.741904 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Jan 29 11:08:06.742064 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 11:08:06.742076 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 29 11:08:06.727445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:08:06.727554 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:08:06.729107 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:08:06.729642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:08:06.729777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:08:06.735705 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:08:06.746492 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:08:06.756193 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 11:08:06.768250 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 29 11:08:06.768375 kernel: sd 0:0:0:1: Power-on or device reset occurred
Jan 29 11:08:06.768502 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Jan 29 11:08:06.768591 kernel: sd 0:0:0:1: [sda] Write Protect is off
Jan 29 11:08:06.768676 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Jan 29 11:08:06.768760 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 29 11:08:06.768842 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 29 11:08:06.769004 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:08:06.769016 kernel: GPT:17805311 != 80003071
Jan 29 11:08:06.769031 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:08:06.769041 kernel: GPT:17805311 != 80003071
Jan 29 11:08:06.769050 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:08:06.769060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:08:06.769070 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Jan 29 11:08:06.769171 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 29 11:08:06.769258 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 29 11:08:06.769341 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 29 11:08:06.769424 kernel: hub 1-0:1.0: USB hub found
Jan 29 11:08:06.769531 kernel: hub 1-0:1.0: 4 ports detected
Jan 29 11:08:06.769614 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 29 11:08:06.769712 kernel: hub 2-0:1.0: USB hub found
Jan 29 11:08:06.769800 kernel: hub 2-0:1.0: 4 ports detected
Jan 29 11:08:06.767228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:08:06.779123 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:08:06.803643 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:08:06.827765 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (526)
Jan 29 11:08:06.828906 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (514)
Jan 29 11:08:06.833690 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 29 11:08:06.840732 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 29 11:08:06.850535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 29 11:08:06.851244 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 29 11:08:06.859140 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:08:06.865784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 11:08:06.870385 disk-uuid[572]: Primary Header is updated.
Jan 29 11:08:06.870385 disk-uuid[572]: Secondary Entries is updated.
Jan 29 11:08:06.870385 disk-uuid[572]: Secondary Header is updated.
Jan 29 11:08:07.006959 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 29 11:08:07.249931 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 29 11:08:07.385427 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 29 11:08:07.385485 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 29 11:08:07.385991 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 29 11:08:07.440353 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 29 11:08:07.440682 kernel: usbcore: registered new interface driver usbhid
Jan 29 11:08:07.440749 kernel: usbhid: USB HID core driver
Jan 29 11:08:07.889931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 29 11:08:07.891614 disk-uuid[574]: The operation has completed successfully.
Jan 29 11:08:07.959809 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:08:07.960695 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:08:07.971123 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:08:07.988445 sh[583]: Success
Jan 29 11:08:08.001244 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:08:08.049476 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:08:08.059186 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:08:08.062968 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:08:08.074635 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:08:08.074695 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:08:08.074707 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:08:08.074718 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:08:08.074727 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:08:08.079907 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 29 11:08:08.081628 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:08:08.082976 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:08:08.089184 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:08:08.093771 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:08:08.105479 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:08:08.105538 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:08:08.105555 kernel: BTRFS info (device sda6): using free space tree
Jan 29 11:08:08.109965 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 29 11:08:08.110030 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 29 11:08:08.119553 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:08:08.120272 kernel: BTRFS info (device sda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:08:08.128050 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:08:08.132620 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:08:08.198254 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:08:08.205235 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:08:08.233819 systemd-networkd[765]: lo: Link UP
Jan 29 11:08:08.235947 systemd-networkd[765]: lo: Gained carrier
Jan 29 11:08:08.237682 systemd-networkd[765]: Enumeration completed
Jan 29 11:08:08.237828 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:08:08.238202 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:08.238205 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:08:08.239101 systemd[1]: Reached target network.target - Network.
Jan 29 11:08:08.240433 systemd-networkd[765]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:08.240436 systemd-networkd[765]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:08:08.241250 systemd-networkd[765]: eth0: Link UP
Jan 29 11:08:08.241253 systemd-networkd[765]: eth0: Gained carrier
Jan 29 11:08:08.241260 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:08.247274 systemd-networkd[765]: eth1: Link UP
Jan 29 11:08:08.247277 systemd-networkd[765]: eth1: Gained carrier
Jan 29 11:08:08.247287 systemd-networkd[765]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:08.252549 ignition[679]: Ignition 2.20.0
Jan 29 11:08:08.252565 ignition[679]: Stage: fetch-offline
Jan 29 11:08:08.255129 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:08:08.252618 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:08.252627 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:08:08.252803 ignition[679]: parsed url from cmdline: ""
Jan 29 11:08:08.252806 ignition[679]: no config URL provided
Jan 29 11:08:08.252811 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:08:08.252819 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:08:08.252824 ignition[679]: failed to fetch config: resource requires networking
Jan 29 11:08:08.253074 ignition[679]: Ignition finished successfully
Jan 29 11:08:08.264238 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 29 11:08:08.279510 ignition[775]: Ignition 2.20.0
Jan 29 11:08:08.279522 ignition[775]: Stage: fetch
Jan 29 11:08:08.279690 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:08.279699 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:08:08.279791 ignition[775]: parsed url from cmdline: ""
Jan 29 11:08:08.279794 ignition[775]: no config URL provided
Jan 29 11:08:08.279799 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:08:08.279807 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:08:08.279909 ignition[775]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 29 11:08:08.280713 ignition[775]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 29 11:08:08.287973 systemd-networkd[765]: eth0: DHCPv4 address 138.199.151.137/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 11:08:08.333991 systemd-networkd[765]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:08:08.481015 ignition[775]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 29 11:08:08.487101 ignition[775]: GET result: OK
Jan 29 11:08:08.487215 ignition[775]: parsing config with SHA512: 216d6fcf48f58ba3cadb55c26fb9419d59708f8ea7ab5e51b95030abf3c320d3f231dc3c7a07cfdbb5e13da4ab084142f4315b9039b931f2e819641b27f70711
Jan 29 11:08:08.494286 unknown[775]: fetched base config from "system"
Jan 29 11:08:08.494296 unknown[775]: fetched base config from "system"
Jan 29 11:08:08.495196 ignition[775]: fetch: fetch complete
Jan 29 11:08:08.494301 unknown[775]: fetched user config from "hetzner"
Jan 29 11:08:08.495202 ignition[775]: fetch: fetch passed
Jan 29 11:08:08.497449 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 29 11:08:08.495258 ignition[775]: Ignition finished successfully
Jan 29 11:08:08.502190 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:08:08.529151 ignition[782]: Ignition 2.20.0
Jan 29 11:08:08.529161 ignition[782]: Stage: kargs
Jan 29 11:08:08.529330 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:08.529339 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:08:08.530326 ignition[782]: kargs: kargs passed
Jan 29 11:08:08.530382 ignition[782]: Ignition finished successfully
Jan 29 11:08:08.534967 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:08:08.541059 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:08:08.553468 ignition[789]: Ignition 2.20.0
Jan 29 11:08:08.553479 ignition[789]: Stage: disks
Jan 29 11:08:08.553660 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:08.556838 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:08:08.553670 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 29 11:08:08.558822 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:08:08.554731 ignition[789]: disks: disks passed Jan 29 11:08:08.554782 ignition[789]: Ignition finished successfully Jan 29 11:08:08.560942 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:08:08.562254 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:08:08.563486 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:08:08.564490 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:08:08.573312 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:08:08.590871 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:08:08.594108 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:08:08.599075 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:08:08.656950 kernel: EXT4-fs (sda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none. Jan 29 11:08:08.658454 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:08:08.660055 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:08:08.666016 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:08:08.670087 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:08:08.673099 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:08:08.677017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:08:08.680044 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (805) Jan 29 11:08:08.678799 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:08:08.682649 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 29 11:08:08.684615 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 11:08:08.684649 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:08:08.684659 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:08:08.686944 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:08:08.686993 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:08:08.692232 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:08:08.696080 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:08:08.753958 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:08:08.757002 coreos-metadata[807]: Jan 29 11:08:08.756 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 11:08:08.760929 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:08:08.762046 coreos-metadata[807]: Jan 29 11:08:08.760 INFO Fetch successful Jan 29 11:08:08.762046 coreos-metadata[807]: Jan 29 11:08:08.760 INFO wrote hostname ci-4152-2-0-b-e71ed2fe96 to /sysroot/etc/hostname Jan 29 11:08:08.762254 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:08:08.768783 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:08:08.773955 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:08:08.877301 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:08:08.886113 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:08:08.890905 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 29 11:08:08.896916 kernel: BTRFS info (device sda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 11:08:08.920471 ignition[923]: INFO : Ignition 2.20.0 Jan 29 11:08:08.920471 ignition[923]: INFO : Stage: mount Jan 29 11:08:08.922361 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:08.922361 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:08:08.922361 ignition[923]: INFO : mount: mount passed Jan 29 11:08:08.922361 ignition[923]: INFO : Ignition finished successfully Jan 29 11:08:08.924150 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:08:08.928040 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:08:08.929403 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:08:09.074707 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:08:09.084266 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:08:09.094963 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (934) Jan 29 11:08:09.096226 kernel: BTRFS info (device sda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0 Jan 29 11:08:09.096270 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 11:08:09.096296 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:08:09.101090 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:08:09.101172 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:08:09.105480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:08:09.137576 ignition[951]: INFO : Ignition 2.20.0 Jan 29 11:08:09.137576 ignition[951]: INFO : Stage: files Jan 29 11:08:09.138960 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:09.138960 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:08:09.138960 ignition[951]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:08:09.142076 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:08:09.142076 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:08:09.144026 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:08:09.144026 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:08:09.144026 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:08:09.143378 unknown[951]: wrote ssh authorized keys file for user: core Jan 29 11:08:09.146877 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 11:08:09.146877 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 11:08:09.240775 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:08:09.386422 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 11:08:09.386422 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:08:09.386422 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jan 29 11:08:09.723208 systemd-networkd[765]: eth1: Gained IPv6LL Jan 29 11:08:09.968052 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:08:09.980411 systemd-networkd[765]: eth0: Gained IPv6LL Jan 29 11:08:10.059033 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Jan 29 11:08:10.060240 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 11:08:10.071029 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 11:08:10.071029 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 11:08:10.071029 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Jan 29 11:08:10.441503 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:08:10.717662 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Jan 29 11:08:10.717662 ignition[951]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:08:10.720305 ignition[951]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:08:10.720305 ignition[951]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:08:10.720305 ignition[951]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:08:10.720305 ignition[951]: INFO : files: files passed Jan 29 11:08:10.720305 ignition[951]: INFO : Ignition finished successfully Jan 29 11:08:10.723168 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 11:08:10.735128 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:08:10.738404 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:08:10.742808 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:08:10.748130 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 29 11:08:10.758317 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:08:10.758317 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:08:10.760913 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:08:10.763184 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:08:10.765177 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:08:10.770062 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:08:10.805043 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:08:10.805203 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:08:10.807365 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:08:10.809149 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:08:10.809769 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:08:10.814272 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:08:10.830207 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:08:10.836126 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:08:10.849601 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:08:10.850557 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:08:10.852016 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:08:10.853173 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 29 11:08:10.853296 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:08:10.854720 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:08:10.855396 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:08:10.856431 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:08:10.857454 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:08:10.858454 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:08:10.859506 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:08:10.860529 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:08:10.861617 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:08:10.862550 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:08:10.863659 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:08:10.864566 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:08:10.864688 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:08:10.865858 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:08:10.866507 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:08:10.867652 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:08:10.867725 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:08:10.868718 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:08:10.868834 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:08:10.870303 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 29 11:08:10.870413 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:08:10.871534 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:08:10.871622 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:08:10.872740 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:08:10.872831 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:08:10.879180 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:08:10.883163 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:08:10.883614 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:08:10.883724 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:08:10.884660 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:08:10.884751 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:08:10.893451 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:08:10.894099 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:08:10.899449 ignition[1003]: INFO : Ignition 2.20.0 Jan 29 11:08:10.899449 ignition[1003]: INFO : Stage: umount Jan 29 11:08:10.900451 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:10.900451 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 11:08:10.902461 ignition[1003]: INFO : umount: umount passed Jan 29 11:08:10.902461 ignition[1003]: INFO : Ignition finished successfully Jan 29 11:08:10.905121 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:08:10.905268 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:08:10.910369 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 29 11:08:10.910860 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:08:10.911058 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:08:10.912380 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:08:10.912490 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:08:10.913136 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:08:10.913180 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:08:10.914017 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:08:10.914057 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:08:10.914836 systemd[1]: Stopped target network.target - Network. Jan 29 11:08:10.915632 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:08:10.915690 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:08:10.916598 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:08:10.917348 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:08:10.920971 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:08:10.922753 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:08:10.923628 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:08:10.924582 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:08:10.924632 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:08:10.925435 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:08:10.925471 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:08:10.926314 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:08:10.926367 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 29 11:08:10.927210 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:08:10.927250 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:08:10.928052 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:08:10.928090 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:08:10.929161 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:08:10.930018 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:08:10.936050 systemd-networkd[765]: eth0: DHCPv6 lease lost Jan 29 11:08:10.940554 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:08:10.940809 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:08:10.941293 systemd-networkd[765]: eth1: DHCPv6 lease lost Jan 29 11:08:10.945011 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:08:10.945202 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:08:10.947587 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:08:10.947685 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:08:10.955110 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:08:10.955864 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:08:10.955992 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:08:10.957258 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:08:10.957325 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:08:10.958472 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:08:10.958534 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:08:10.959812 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 29 11:08:10.959877 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:08:10.961410 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:08:10.973665 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:08:10.973787 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:08:10.977751 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:08:10.977985 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:08:10.979515 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:08:10.979560 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:08:10.980497 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:08:10.980536 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:08:10.982117 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:08:10.982181 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:08:10.983671 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:08:10.983724 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:08:10.985262 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:08:10.985316 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:08:10.995121 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:08:10.995821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:08:10.995938 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:08:10.999219 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 29 11:08:10.999280 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:08:11.000316 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:08:11.000357 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:08:11.004398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:08:11.004450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:11.006259 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:08:11.008341 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:08:11.009402 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:08:11.019227 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:08:11.028620 systemd[1]: Switching root. Jan 29 11:08:11.066384 systemd-journald[237]: Journal stopped Jan 29 11:08:12.022356 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 29 11:08:12.022451 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:08:12.022469 kernel: SELinux: policy capability open_perms=1 Jan 29 11:08:12.022479 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:08:12.022489 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:08:12.022502 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:08:12.022516 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:08:12.022526 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:08:12.022536 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:08:12.022545 kernel: audit: type=1403 audit(1738148891.243:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:08:12.022558 systemd[1]: Successfully loaded SELinux policy in 36.205ms. 
Jan 29 11:08:12.022987 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.228ms. Jan 29 11:08:12.023010 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:08:12.023023 systemd[1]: Detected virtualization kvm. Jan 29 11:08:12.023034 systemd[1]: Detected architecture arm64. Jan 29 11:08:12.023044 systemd[1]: Detected first boot. Jan 29 11:08:12.023055 systemd[1]: Hostname set to . Jan 29 11:08:12.023065 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:08:12.023079 zram_generator::config[1045]: No configuration found. Jan 29 11:08:12.023090 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:08:12.023101 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:08:12.023116 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:08:12.023126 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:08:12.023142 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:08:12.023152 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:08:12.023163 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:08:12.023175 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:08:12.023186 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:08:12.023197 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:08:12.023210 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Jan 29 11:08:12.023221 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:08:12.023231 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:08:12.023242 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:08:12.023252 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:08:12.023263 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:08:12.023275 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:08:12.023286 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:08:12.023297 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:08:12.023308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:08:12.023318 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:08:12.023329 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:08:12.023341 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:08:12.023352 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:08:12.023363 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:08:12.023374 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:08:12.023384 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:08:12.023395 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:08:12.023405 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:08:12.023416 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Jan 29 11:08:12.023426 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:08:12.023439 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:08:12.023451 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:08:12.023462 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:08:12.023473 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:08:12.023483 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:08:12.023494 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:08:12.023504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:08:12.023515 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:08:12.023532 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:08:12.023545 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:08:12.023561 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:08:12.023572 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:08:12.023582 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:08:12.023593 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:08:12.023603 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:08:12.023616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:08:12.023627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:08:12.023637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:08:12.023647 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:08:12.023658 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:08:12.023670 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:08:12.023682 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:08:12.023693 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:08:12.023705 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:08:12.023716 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:08:12.023727 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:08:12.023738 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:08:12.023748 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:08:12.023759 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:08:12.023769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:08:12.023781 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:08:12.023791 systemd[1]: Stopped verity-setup.service.
Jan 29 11:08:12.023803 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:08:12.023813 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:08:12.023824 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:08:12.023834 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:08:12.023845 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:08:12.023856 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:08:12.023868 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:08:12.023879 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:08:12.025203 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:08:12.025227 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:08:12.025238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:08:12.025249 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:08:12.025261 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:08:12.025277 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:08:12.025288 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:08:12.025300 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:08:12.025310 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:08:12.025322 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:08:12.025332 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:08:12.025345 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:08:12.025356 kernel: loop: module loaded
Jan 29 11:08:12.025368 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:08:12.025379 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:08:12.025391 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:08:12.025402 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:08:12.025446 systemd-journald[1115]: Collecting audit messages is disabled.
Jan 29 11:08:12.025517 kernel: fuse: init (API version 7.39)
Jan 29 11:08:12.025536 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:08:12.025547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:08:12.025559 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:08:12.025572 systemd-journald[1115]: Journal started
Jan 29 11:08:12.025598 systemd-journald[1115]: Runtime Journal (/run/log/journal/5a3a168230fa41559e5808773faf25db) is 8.0M, max 76.6M, 68.6M free.
Jan 29 11:08:11.740155 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:08:11.758411 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 29 11:08:11.759174 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:08:12.018134 systemd-tmpfiles[1132]: ACLs are not supported, ignoring.
Jan 29 11:08:12.018145 systemd-tmpfiles[1132]: ACLs are not supported, ignoring.
Jan 29 11:08:12.027960 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:08:12.039099 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:08:12.043453 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:08:12.048465 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:08:12.052720 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:08:12.055004 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:08:12.056388 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:08:12.056582 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:08:12.057621 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:08:12.057757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:08:12.060380 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:08:12.061623 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:08:12.089986 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:08:12.109069 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:08:12.120109 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:08:12.128333 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:08:12.144361 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:08:12.146919 kernel: ACPI: bus type drm_connector registered
Jan 29 11:08:12.146946 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:08:12.149827 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:08:12.153989 systemd-journald[1115]: Time spent on flushing to /var/log/journal/5a3a168230fa41559e5808773faf25db is 82.834ms for 1135 entries.
Jan 29 11:08:12.153989 systemd-journald[1115]: System Journal (/var/log/journal/5a3a168230fa41559e5808773faf25db) is 8.0M, max 584.8M, 576.8M free.
Jan 29 11:08:12.260256 kernel: loop0: detected capacity change from 0 to 113536
Jan 29 11:08:12.260355 systemd-journald[1115]: Received client request to flush runtime journal.
Jan 29 11:08:12.260395 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:08:12.260412 kernel: loop1: detected capacity change from 0 to 116808
Jan 29 11:08:12.158192 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:08:12.161043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:08:12.170187 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:08:12.171103 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:08:12.171250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:08:12.184427 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:08:12.218667 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 11:08:12.250483 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:08:12.260754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:08:12.267980 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:08:12.272857 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:08:12.274933 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:08:12.296270 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 29 11:08:12.296577 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 29 11:08:12.300926 kernel: loop2: detected capacity change from 0 to 189592
Jan 29 11:08:12.301408 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:08:12.337928 kernel: loop3: detected capacity change from 0 to 8
Jan 29 11:08:12.361941 kernel: loop4: detected capacity change from 0 to 113536
Jan 29 11:08:12.378923 kernel: loop5: detected capacity change from 0 to 116808
Jan 29 11:08:12.401940 kernel: loop6: detected capacity change from 0 to 189592
Jan 29 11:08:12.425589 kernel: loop7: detected capacity change from 0 to 8
Jan 29 11:08:12.426769 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Jan 29 11:08:12.429077 (sd-merge)[1187]: Merged extensions into '/usr'.
Jan 29 11:08:12.432982 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:08:12.433111 systemd[1]: Reloading...
Jan 29 11:08:12.524980 zram_generator::config[1210]: No configuration found.
Jan 29 11:08:12.578747 ldconfig[1138]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:08:12.668649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:08:12.714036 systemd[1]: Reloading finished in 279 ms.
Jan 29 11:08:12.751983 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:08:12.754835 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:08:12.764257 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:08:12.767763 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:08:12.776805 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:08:12.776821 systemd[1]: Reloading...
Jan 29 11:08:12.815209 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:08:12.817249 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:08:12.818151 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:08:12.820005 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Jan 29 11:08:12.820190 systemd-tmpfiles[1251]: ACLs are not supported, ignoring.
Jan 29 11:08:12.828562 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:08:12.831002 systemd-tmpfiles[1251]: Skipping /boot
Jan 29 11:08:12.856311 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:08:12.857956 systemd-tmpfiles[1251]: Skipping /boot
Jan 29 11:08:12.864994 zram_generator::config[1276]: No configuration found.
Jan 29 11:08:12.961526 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:08:13.006566 systemd[1]: Reloading finished in 229 ms.
Jan 29 11:08:13.029066 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:08:13.035478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:08:13.045182 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:08:13.054138 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:08:13.059423 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:08:13.063505 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:08:13.068233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:08:13.071637 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:08:13.077262 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:08:13.079139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:08:13.081261 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:08:13.085242 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:08:13.085985 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:08:13.092757 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:08:13.097156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:08:13.097315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:08:13.102285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:08:13.109064 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:08:13.109938 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:08:13.114137 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:08:13.120135 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:08:13.130829 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:08:13.137167 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:08:13.138480 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:08:13.142310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:08:13.142475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:08:13.149188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:08:13.149347 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:08:13.171952 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:08:13.172120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:08:13.172960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:08:13.173349 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:08:13.173503 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:08:13.178415 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:08:13.181386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:08:13.188064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:08:13.189154 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:08:13.198321 systemd-udevd[1321]: Using default interface naming scheme 'v255'.
Jan 29 11:08:13.206699 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:08:13.218654 augenrules[1362]: No rules
Jan 29 11:08:13.224188 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:08:13.225107 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:08:13.239550 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:08:13.249122 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:08:13.284974 systemd-resolved[1320]: Positive Trust Anchors:
Jan 29 11:08:13.285053 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:08:13.285085 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:08:13.293926 systemd-resolved[1320]: Using system hostname 'ci-4152-2-0-b-e71ed2fe96'.
Jan 29 11:08:13.296947 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:08:13.299117 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:08:13.300740 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:08:13.302324 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:08:13.337006 systemd-networkd[1370]: lo: Link UP
Jan 29 11:08:13.337347 systemd-networkd[1370]: lo: Gained carrier
Jan 29 11:08:13.338771 systemd-networkd[1370]: Enumeration completed
Jan 29 11:08:13.339110 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:08:13.339871 systemd[1]: Reached target network.target - Network.
Jan 29 11:08:13.354300 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:08:13.370826 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 11:08:13.454074 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 11:08:13.466735 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:13.466746 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:08:13.469043 systemd-networkd[1370]: eth0: Link UP
Jan 29 11:08:13.469056 systemd-networkd[1370]: eth0: Gained carrier
Jan 29 11:08:13.469080 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:13.493292 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Jan 29 11:08:13.493417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:08:13.500183 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:08:13.507664 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:08:13.510774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:08:13.512050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:08:13.512089 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:08:13.515520 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:08:13.515721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:08:13.523574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:08:13.524055 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:08:13.525739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:08:13.529028 systemd-networkd[1370]: eth0: DHCPv4 address 138.199.151.137/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 29 11:08:13.530508 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jan 29 11:08:13.537695 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:08:13.537865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:08:13.540539 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:08:13.558548 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1381)
Jan 29 11:08:13.572530 systemd-networkd[1370]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:13.573955 systemd-networkd[1370]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:08:13.574691 systemd-networkd[1370]: eth1: Link UP
Jan 29 11:08:13.574695 systemd-networkd[1370]: eth1: Gained carrier
Jan 29 11:08:13.574716 systemd-networkd[1370]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:13.575011 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jan 29 11:08:13.580869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:08:13.583023 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jan 29 11:08:13.600005 systemd-networkd[1370]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:08:13.602142 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection.
Jan 29 11:08:13.608949 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Jan 29 11:08:13.609054 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 29 11:08:13.609080 kernel: [drm] features: -context_init
Jan 29 11:08:13.609951 kernel: [drm] number of scanouts: 1
Jan 29 11:08:13.610013 kernel: [drm] number of cap sets: 0
Jan 29 11:08:13.612928 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Jan 29 11:08:13.618480 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 29 11:08:13.619972 kernel: Console: switching to colour frame buffer device 160x50
Jan 29 11:08:13.627936 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 29 11:08:13.633198 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:08:13.643029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:08:13.643244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:08:13.652345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:08:13.654734 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:08:13.702499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:08:13.758653 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:08:13.766305 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:08:13.783005 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:08:13.812721 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:08:13.815001 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:08:13.816680 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:08:13.818919 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:08:13.820571 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:08:13.821478 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:08:13.822247 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:08:13.823099 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:08:13.823832 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:08:13.823876 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:08:13.824472 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:08:13.826148 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:08:13.828248 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:08:13.835446 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:08:13.838040 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:08:13.839416 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:08:13.840235 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:08:13.840822 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:08:13.841940 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:08:13.841977 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:08:13.847131 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:08:13.852232 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 29 11:08:13.852867 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:08:13.856296 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:08:13.863066 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:08:13.867264 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:08:13.867803 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:08:13.871124 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:08:13.878441 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 11:08:13.882218 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Jan 29 11:08:13.886205 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:08:13.891137 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:08:13.891822 jq[1443]: false
Jan 29 11:08:13.904160 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:08:13.906455 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:08:13.909283 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:08:13.915080 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:08:13.919312 extend-filesystems[1444]: Found loop4
Jan 29 11:08:13.922166 extend-filesystems[1444]: Found loop5
Jan 29 11:08:13.922166 extend-filesystems[1444]: Found loop6
Jan 29 11:08:13.922166 extend-filesystems[1444]: Found loop7
Jan 29 11:08:13.922166 extend-filesystems[1444]: Found sda
Jan 29 11:08:13.929939 extend-filesystems[1444]: Found sda1
Jan 29 11:08:13.929939 extend-filesystems[1444]: Found sda2
Jan 29 11:08:13.929939 extend-filesystems[1444]: Found sda3
Jan 29 11:08:13.929939 extend-filesystems[1444]: Found usr
Jan 29 11:08:13.929939 extend-filesystems[1444]: Found sda4
Jan 29 11:08:13.929939 extend-filesystems[1444]: Found sda6
Jan 29 11:08:13.929939 extend-filesystems[1444]: Found sda7
Jan 29 11:08:13.929939 extend-filesystems[1444]: Found sda9
Jan 29 11:08:13.929939 extend-filesystems[1444]: Checking size of /dev/sda9
Jan 29 11:08:13.923039 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:08:13.924417 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:08:13.929307 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:08:13.929488 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:08:13.958196 dbus-daemon[1442]: [system] SELinux support is enabled
Jan 29 11:08:13.960184 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:08:13.965333 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:08:13.965371 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:08:13.968106 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:08:13.968132 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:08:13.974406 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:08:13.974710 extend-filesystems[1444]: Resized partition /dev/sda9 Jan 29 11:08:13.975220 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:08:13.978523 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:08:13.982928 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 11:08:13.994572 update_engine[1456]: I20250129 11:08:13.994419 1456 main.cc:92] Flatcar Update Engine starting Jan 29 11:08:13.994921 jq[1458]: true Jan 29 11:08:13.997811 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:08:13.998115 coreos-metadata[1441]: Jan 29 11:08:13.998 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 11:08:13.999922 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:08:14.000829 update_engine[1456]: I20250129 11:08:14.000772 1456 update_check_scheduler.cc:74] Next update check in 7m18s Jan 29 11:08:14.002107 coreos-metadata[1441]: Jan 29 11:08:14.002 INFO Fetch successful Jan 29 11:08:14.002107 coreos-metadata[1441]: Jan 29 11:08:14.002 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 11:08:14.002250 coreos-metadata[1441]: Jan 29 11:08:14.002 INFO Fetch successful Jan 29 11:08:14.015298 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:08:14.019213 systemd[1]: Started update-engine.service - Update Engine. 
Jan 29 11:08:14.039272 jq[1481]: true Jan 29 11:08:14.045246 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:08:14.054077 tar[1473]: linux-arm64/helm Jan 29 11:08:14.112976 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1389) Jan 29 11:08:14.170538 systemd-logind[1452]: New seat seat0. Jan 29 11:08:14.201788 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 11:08:14.172675 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:08:14.172691 systemd-logind[1452]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 29 11:08:14.175848 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:08:14.191955 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:08:14.193326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:08:14.204459 extend-filesystems[1471]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 11:08:14.204459 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 11:08:14.204459 extend-filesystems[1471]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 11:08:14.208419 extend-filesystems[1444]: Resized filesystem in /dev/sda9 Jan 29 11:08:14.208419 extend-filesystems[1444]: Found sr0 Jan 29 11:08:14.206278 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:08:14.206482 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:08:14.232757 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:08:14.234349 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:08:14.247618 systemd[1]: Starting sshkeys.service... 
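The extend-filesystems entries above record an online resize2fs grow of /dev/sda9 from 1617920 to 9393147 blocks at the ext4 default 4 KiB block size. A minimal sketch (block counts and block size taken from the log lines above) of converting those figures to GiB:

```python
# Block counts from the resize2fs/EXT4-fs log entries above; 4096-byte
# blocks per the "(4k) blocks" note in the extend-filesystems output.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int) -> float:
    """Convert an ext4 block count to GiB at the 4 KiB block size."""
    return blocks * BLOCK_SIZE / 2**30

old_blocks, new_blocks = 1617920, 9393147
print(f"{blocks_to_gib(old_blocks):.2f} GiB -> {blocks_to_gib(new_blocks):.2f} GiB")
```

So the root filesystem grew from roughly 6.17 GiB to roughly 35.83 GiB while mounted, which is why the log shows "on-line resizing required".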
Jan 29 11:08:14.275211 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:08:14.284034 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:08:14.341540 coreos-metadata[1525]: Jan 29 11:08:14.341 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 11:08:14.344447 coreos-metadata[1525]: Jan 29 11:08:14.343 INFO Fetch successful Jan 29 11:08:14.348466 unknown[1525]: wrote ssh authorized keys file for user: core Jan 29 11:08:14.374650 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:08:14.383905 update-ssh-keys[1530]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:08:14.384987 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:08:14.389009 containerd[1474]: time="2025-01-29T11:08:14.388476120Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:08:14.390985 systemd[1]: Finished sshkeys.service. Jan 29 11:08:14.460919 containerd[1474]: time="2025-01-29T11:08:14.460845280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:14.463443 containerd[1474]: time="2025-01-29T11:08:14.463403400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.464899520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.464942840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465119000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465137520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465204480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465216400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465386440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465402040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465415760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465425000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465489320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466104 containerd[1474]: time="2025-01-29T11:08:14.465674240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466370 containerd[1474]: time="2025-01-29T11:08:14.465763720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:14.466370 containerd[1474]: time="2025-01-29T11:08:14.465777280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:08:14.466370 containerd[1474]: time="2025-01-29T11:08:14.465844160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:08:14.466370 containerd[1474]: time="2025-01-29T11:08:14.465941280Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:08:14.474941 containerd[1474]: time="2025-01-29T11:08:14.474765920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:08:14.474941 containerd[1474]: time="2025-01-29T11:08:14.474835120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:08:14.474941 containerd[1474]: time="2025-01-29T11:08:14.474852040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:08:14.474941 containerd[1474]: time="2025-01-29T11:08:14.474868760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 11:08:14.474941 containerd[1474]: time="2025-01-29T11:08:14.474912400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475295120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475564440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475661840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475677680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475693080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475708320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475721600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475734480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475748360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475763120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475776320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475789400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475801120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:08:14.475943 containerd[1474]: time="2025-01-29T11:08:14.475822560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.476222 containerd[1474]: time="2025-01-29T11:08:14.475835520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.476222 containerd[1474]: time="2025-01-29T11:08:14.475848840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.476222 containerd[1474]: time="2025-01-29T11:08:14.475862240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.477932600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.477970960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.477988000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478001320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478014760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478032280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478045040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478059040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478074320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478089440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478128640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478146400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478158520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:08:14.479896 containerd[1474]: time="2025-01-29T11:08:14.478341200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 11:08:14.480183 containerd[1474]: time="2025-01-29T11:08:14.478361080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:08:14.480183 containerd[1474]: time="2025-01-29T11:08:14.478375120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:08:14.480183 containerd[1474]: time="2025-01-29T11:08:14.478387040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:08:14.480183 containerd[1474]: time="2025-01-29T11:08:14.478396440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:08:14.480183 containerd[1474]: time="2025-01-29T11:08:14.478416120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:08:14.480183 containerd[1474]: time="2025-01-29T11:08:14.478427120Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:08:14.480183 containerd[1474]: time="2025-01-29T11:08:14.478437400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:08:14.480302 containerd[1474]: time="2025-01-29T11:08:14.478777960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:08:14.480302 containerd[1474]: time="2025-01-29T11:08:14.478824760Z" level=info msg="Connect containerd service" Jan 29 11:08:14.480302 containerd[1474]: time="2025-01-29T11:08:14.478855720Z" level=info msg="using legacy CRI server" Jan 29 11:08:14.480302 containerd[1474]: time="2025-01-29T11:08:14.478862040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:08:14.483229 containerd[1474]: time="2025-01-29T11:08:14.483188760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:08:14.486376 containerd[1474]: time="2025-01-29T11:08:14.486322440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:08:14.489478 containerd[1474]: time="2025-01-29T11:08:14.489161680Z" level=info msg="Start subscribing containerd event" Jan 29 11:08:14.489478 containerd[1474]: time="2025-01-29T11:08:14.489232160Z" level=info msg="Start recovering state" Jan 29 11:08:14.489478 containerd[1474]: time="2025-01-29T11:08:14.489420600Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 29 11:08:14.490358 containerd[1474]: time="2025-01-29T11:08:14.490334760Z" level=info msg="Start event monitor" Jan 29 11:08:14.490390 containerd[1474]: time="2025-01-29T11:08:14.490362920Z" level=info msg="Start snapshots syncer" Jan 29 11:08:14.490390 containerd[1474]: time="2025-01-29T11:08:14.490374200Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:08:14.490390 containerd[1474]: time="2025-01-29T11:08:14.490384800Z" level=info msg="Start streaming server" Jan 29 11:08:14.493198 containerd[1474]: time="2025-01-29T11:08:14.492004600Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:08:14.493198 containerd[1474]: time="2025-01-29T11:08:14.492099680Z" level=info msg="containerd successfully booted in 0.110963s" Jan 29 11:08:14.492217 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:08:14.681410 tar[1473]: linux-arm64/LICENSE Jan 29 11:08:14.681498 tar[1473]: linux-arm64/README.md Jan 29 11:08:14.695412 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:08:14.851036 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:08:14.878994 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:08:14.890526 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:08:14.899960 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:08:14.900188 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:08:14.908292 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:08:14.919444 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:08:14.925573 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:08:14.934602 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Jan 29 11:08:14.935691 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:08:15.227172 systemd-networkd[1370]: eth1: Gained IPv6LL Jan 29 11:08:15.228282 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection. Jan 29 11:08:15.231307 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:08:15.233732 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:08:15.242329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:15.245382 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:08:15.269981 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:08:15.483281 systemd-networkd[1370]: eth0: Gained IPv6LL Jan 29 11:08:15.483785 systemd-timesyncd[1334]: Network configuration changed, trying to establish connection. Jan 29 11:08:15.932580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:08:15.933861 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:08:15.938565 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:08:15.939052 systemd[1]: Startup finished in 767ms (kernel) + 5.548s (initrd) + 4.731s (userspace) = 11.047s. Jan 29 11:08:16.487048 kubelet[1571]: E0129 11:08:16.486957 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:08:16.490641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:08:16.490829 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
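The "Startup finished" entry above breaks the boot into kernel, initrd, and userspace phases. A minimal sketch (phase durations copied from that log line) showing the total is just the sum of the phases:

```python
# Phase durations in seconds, as printed by systemd in the log above.
PHASES = {"kernel": 0.767, "initrd": 5.548, "userspace": 4.731}

total = sum(PHASES.values())
# The rounded phase values sum to 11.046s; systemd sums the unrounded
# microsecond values internally, hence the 11.047s total in the log.
print(f"total boot time: {total:.3f}s")
```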
Jan 29 11:08:26.741463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:08:26.758252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:26.865183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:08:26.877372 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:08:26.931421 kubelet[1590]: E0129 11:08:26.931325 1590 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:08:26.934665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:08:26.934894 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:08:37.185710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:08:37.204203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:37.319483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:08:37.331854 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:08:37.377033 kubelet[1605]: E0129 11:08:37.376981 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:08:37.380099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:08:37.380342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:08:45.688798 systemd-timesyncd[1334]: Contacted time server 194.25.134.196:123 (2.flatcar.pool.ntp.org). Jan 29 11:08:45.688954 systemd-timesyncd[1334]: Initial clock synchronization to Wed 2025-01-29 11:08:45.658475 UTC. Jan 29 11:08:47.401436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:08:47.413213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:47.539155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:08:47.540804 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:08:47.579717 kubelet[1621]: E0129 11:08:47.579665 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:08:47.582356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:08:47.582624 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:08:57.650955 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 11:08:57.660248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:57.784190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:08:57.797518 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:08:57.848193 kubelet[1637]: E0129 11:08:57.848135 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:08:57.850241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:08:57.850372 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:08:58.920060 update_engine[1456]: I20250129 11:08:58.919046 1456 update_attempter.cc:509] Updating boot flags... 
Jan 29 11:08:58.964905 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1653) Jan 29 11:08:59.037930 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1656) Jan 29 11:09:07.901069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 11:09:07.907241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:08.021845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:08.034469 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:08.080211 kubelet[1670]: E0129 11:09:08.080131 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:08.084186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:08.084408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:18.151165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 11:09:18.168290 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:18.294914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:09:18.306507 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:18.355740 kubelet[1686]: E0129 11:09:18.355676 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:18.358641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:18.358809 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:28.401133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 11:09:28.406323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:28.525096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:28.530036 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:28.572538 kubelet[1700]: E0129 11:09:28.572468 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:28.574631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:28.574811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:38.651489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 11:09:38.661003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:09:38.770619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:38.775291 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:38.818069 kubelet[1715]: E0129 11:09:38.818002 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:38.820365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:38.820513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:48.901581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 11:09:48.912266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:49.041109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:49.041545 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:49.086271 kubelet[1730]: E0129 11:09:49.086203 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:49.088794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:49.088997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:09:59.151548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. 
Jan 29 11:09:59.167441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:09:59.293116 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:09:59.295807 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:09:59.339096 kubelet[1744]: E0129 11:09:59.339032 1744 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:09:59.341224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:09:59.341362 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:10:09.400996 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 11:10:09.407524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:09.539205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:09.539304 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:10:09.584559 kubelet[1758]: E0129 11:10:09.584456 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:10:09.587427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:10:09.587627 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 11:10:11.083178 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:10:11.093380 systemd[1]: Started sshd@0-138.199.151.137:22-147.75.109.163:38382.service - OpenSSH per-connection server daemon (147.75.109.163:38382). Jan 29 11:10:12.101761 sshd[1766]: Accepted publickey for core from 147.75.109.163 port 38382 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:12.104302 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:12.114966 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:10:12.122343 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:10:12.126178 systemd-logind[1452]: New session 1 of user core. Jan 29 11:10:12.136304 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:10:12.142386 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:10:12.154839 (systemd)[1770]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:10:12.258010 systemd[1770]: Queued start job for default target default.target. Jan 29 11:10:12.271650 systemd[1770]: Created slice app.slice - User Application Slice. Jan 29 11:10:12.271719 systemd[1770]: Reached target paths.target - Paths. Jan 29 11:10:12.271750 systemd[1770]: Reached target timers.target - Timers. Jan 29 11:10:12.274624 systemd[1770]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:10:12.288324 systemd[1770]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:10:12.288501 systemd[1770]: Reached target sockets.target - Sockets. Jan 29 11:10:12.288541 systemd[1770]: Reached target basic.target - Basic System. Jan 29 11:10:12.288621 systemd[1770]: Reached target default.target - Main User Target. Jan 29 11:10:12.288665 systemd[1770]: Startup finished in 126ms. 
Jan 29 11:10:12.288814 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:10:12.301225 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:10:13.002032 systemd[1]: Started sshd@1-138.199.151.137:22-147.75.109.163:38396.service - OpenSSH per-connection server daemon (147.75.109.163:38396). Jan 29 11:10:14.004351 sshd[1781]: Accepted publickey for core from 147.75.109.163 port 38396 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:14.006473 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:14.011033 systemd-logind[1452]: New session 2 of user core. Jan 29 11:10:14.022230 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:10:14.693716 sshd[1783]: Connection closed by 147.75.109.163 port 38396 Jan 29 11:10:14.693341 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:14.698949 systemd[1]: sshd@1-138.199.151.137:22-147.75.109.163:38396.service: Deactivated successfully. Jan 29 11:10:14.700597 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:10:14.702085 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:10:14.703474 systemd-logind[1452]: Removed session 2. Jan 29 11:10:14.868706 systemd[1]: Started sshd@2-138.199.151.137:22-147.75.109.163:38406.service - OpenSSH per-connection server daemon (147.75.109.163:38406). Jan 29 11:10:15.861133 sshd[1788]: Accepted publickey for core from 147.75.109.163 port 38406 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:15.863468 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:15.869180 systemd-logind[1452]: New session 3 of user core. Jan 29 11:10:15.877404 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 29 11:10:16.542311 sshd[1790]: Connection closed by 147.75.109.163 port 38406 Jan 29 11:10:16.543159 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:16.548877 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:10:16.550017 systemd[1]: sshd@2-138.199.151.137:22-147.75.109.163:38406.service: Deactivated successfully. Jan 29 11:10:16.552149 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:10:16.553816 systemd-logind[1452]: Removed session 3. Jan 29 11:10:16.720363 systemd[1]: Started sshd@3-138.199.151.137:22-147.75.109.163:38416.service - OpenSSH per-connection server daemon (147.75.109.163:38416). Jan 29 11:10:17.695862 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 38416 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:17.698251 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:17.702699 systemd-logind[1452]: New session 4 of user core. Jan 29 11:10:17.711197 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:10:18.369979 sshd[1797]: Connection closed by 147.75.109.163 port 38416 Jan 29 11:10:18.371011 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:18.375422 systemd[1]: sshd@3-138.199.151.137:22-147.75.109.163:38416.service: Deactivated successfully. Jan 29 11:10:18.377273 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:10:18.378405 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:10:18.379484 systemd-logind[1452]: Removed session 4. Jan 29 11:10:18.552589 systemd[1]: Started sshd@4-138.199.151.137:22-147.75.109.163:45910.service - OpenSSH per-connection server daemon (147.75.109.163:45910). 
Jan 29 11:10:19.538478 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 45910 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:19.541166 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:19.546698 systemd-logind[1452]: New session 5 of user core. Jan 29 11:10:19.558288 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:10:19.651083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 11:10:19.656270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:19.783132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:19.785793 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:10:19.826749 kubelet[1813]: E0129 11:10:19.826544 1813 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:10:19.829474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:10:19.829658 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
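[Editor's note] The restart loop recorded above (restart counters 5 through 12) is a single repeating failure: kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist. A minimal diagnostic sketch follows; it is not part of the log, and it assumes a standard kubeadm-managed node, where `kubeadm init` or `kubeadm join` writes that config file (the `install.sh` run in the next session is presumably what eventually creates it here).

```shell
# Hypothetical check, not taken from the log: confirm the file whose absence
# drives the crash loop. Path copied from the kubelet error message above.
CONFIG=/var/lib/kubelet/config.yaml
if [ -f "$CONFIG" ]; then
  echo "present: $CONFIG"
else
  # Matches the logged error: open ...: no such file or directory
  echo "missing: $CONFIG (kubelet exits 1 until kubeadm init/join creates it)"
fi
```

Because the unit declares a scheduled restart, systemd re-runs kubelet roughly every ten seconds, producing the repeated Started/Failed pairs seen throughout this log rather than a single terminal failure.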
Jan 29 11:10:20.074189 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:10:20.075323 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:10:20.099854 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 29 11:10:20.259728 sshd[1804]: Connection closed by 147.75.109.163 port 45910 Jan 29 11:10:20.261007 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:20.268613 systemd[1]: sshd@4-138.199.151.137:22-147.75.109.163:45910.service: Deactivated successfully. Jan 29 11:10:20.270574 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:10:20.271581 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:10:20.272805 systemd-logind[1452]: Removed session 5. Jan 29 11:10:20.437225 systemd[1]: Started sshd@5-138.199.151.137:22-147.75.109.163:45918.service - OpenSSH per-connection server daemon (147.75.109.163:45918). Jan 29 11:10:21.425538 sshd[1825]: Accepted publickey for core from 147.75.109.163 port 45918 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:21.427468 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:21.432459 systemd-logind[1452]: New session 6 of user core. Jan 29 11:10:21.443242 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 11:10:21.949516 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:10:21.949849 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:10:21.953731 sudo[1829]: pam_unix(sudo:session): session closed for user root Jan 29 11:10:21.961621 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:10:21.962091 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:10:21.983170 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:10:22.013363 augenrules[1851]: No rules Jan 29 11:10:22.014213 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:10:22.014503 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:10:22.016201 sudo[1828]: pam_unix(sudo:session): session closed for user root Jan 29 11:10:22.176914 sshd[1827]: Connection closed by 147.75.109.163 port 45918 Jan 29 11:10:22.176252 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:22.180254 systemd[1]: sshd@5-138.199.151.137:22-147.75.109.163:45918.service: Deactivated successfully. Jan 29 11:10:22.182696 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:10:22.185697 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:10:22.186974 systemd-logind[1452]: Removed session 6. Jan 29 11:10:22.354455 systemd[1]: Started sshd@6-138.199.151.137:22-147.75.109.163:45922.service - OpenSSH per-connection server daemon (147.75.109.163:45922). 
Jan 29 11:10:23.343767 sshd[1859]: Accepted publickey for core from 147.75.109.163 port 45922 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:10:23.346294 sshd-session[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:10:23.353282 systemd-logind[1452]: New session 7 of user core. Jan 29 11:10:23.364267 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:10:23.869897 sudo[1862]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:10:23.870198 sudo[1862]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:10:24.176200 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:10:24.178959 (dockerd)[1880]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:10:24.417297 dockerd[1880]: time="2025-01-29T11:10:24.416796752Z" level=info msg="Starting up" Jan 29 11:10:24.509249 dockerd[1880]: time="2025-01-29T11:10:24.508638575Z" level=info msg="Loading containers: start." Jan 29 11:10:24.683941 kernel: Initializing XFRM netlink socket Jan 29 11:10:24.774848 systemd-networkd[1370]: docker0: Link UP Jan 29 11:10:24.814875 dockerd[1880]: time="2025-01-29T11:10:24.814784089Z" level=info msg="Loading containers: done." 
Jan 29 11:10:24.830835 dockerd[1880]: time="2025-01-29T11:10:24.830748348Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:10:24.831051 dockerd[1880]: time="2025-01-29T11:10:24.830913592Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:10:24.831051 dockerd[1880]: time="2025-01-29T11:10:24.831047276Z" level=info msg="Daemon has completed initialization" Jan 29 11:10:24.874617 dockerd[1880]: time="2025-01-29T11:10:24.874514628Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:10:24.874844 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:10:25.906916 containerd[1474]: time="2025-01-29T11:10:25.906778825Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 29 11:10:26.525529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880490455.mount: Deactivated successfully. 
Jan 29 11:10:28.118934 containerd[1474]: time="2025-01-29T11:10:28.118819329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:28.120903 containerd[1474]: time="2025-01-29T11:10:28.120821732Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=25618162" Jan 29 11:10:28.121626 containerd[1474]: time="2025-01-29T11:10:28.121233981Z" level=info msg="ImageCreate event name:\"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:28.124420 containerd[1474]: time="2025-01-29T11:10:28.124352449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:28.125745 containerd[1474]: time="2025-01-29T11:10:28.125549315Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"25614870\" in 2.218725448s" Jan 29 11:10:28.125745 containerd[1474]: time="2025-01-29T11:10:28.125598156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:c33b6b5a9aa5348a4f3ab96e0977e49acb8ca86c4ec3973023e12c0083423692\"" Jan 29 11:10:28.126643 containerd[1474]: time="2025-01-29T11:10:28.126439934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 29 11:10:29.900765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Jan 29 11:10:29.910398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:30.035173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:30.039840 (kubelet)[2134]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:10:30.082822 kubelet[2134]: E0129 11:10:30.082389 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:10:30.085117 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:10:30.085260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:10:30.127177 containerd[1474]: time="2025-01-29T11:10:30.127018730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:30.128935 containerd[1474]: time="2025-01-29T11:10:30.128570402Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=22469487" Jan 29 11:10:30.130077 containerd[1474]: time="2025-01-29T11:10:30.130009512Z" level=info msg="ImageCreate event name:\"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:30.134136 containerd[1474]: time="2025-01-29T11:10:30.134051996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:30.135707 containerd[1474]: time="2025-01-29T11:10:30.135545787Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"23873257\" in 2.009073893s" Jan 29 11:10:30.135707 containerd[1474]: time="2025-01-29T11:10:30.135592948Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:678a3aee724f5d7904c30cda32c06f842784d67e7bd0cece4225fa7c1dcd0c73\"" Jan 29 11:10:30.136534 containerd[1474]: time="2025-01-29T11:10:30.136479886Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 29 11:10:31.843243 containerd[1474]: time="2025-01-29T11:10:31.843094810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:31.845172 containerd[1474]: time="2025-01-29T11:10:31.845105811Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=17024237" Jan 29 11:10:31.845913 containerd[1474]: time="2025-01-29T11:10:31.845519779Z" level=info msg="ImageCreate event name:\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:31.849870 containerd[1474]: time="2025-01-29T11:10:31.849753865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:31.852012 containerd[1474]: time="2025-01-29T11:10:31.851950710Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id 
\"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"18428025\" in 1.715264259s" Jan 29 11:10:31.852443 containerd[1474]: time="2025-01-29T11:10:31.852204035Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:066a1dc527aec5b7c19bcf4b81f92b15816afc78e9713266d355333b7eb81050\"" Jan 29 11:10:31.853189 containerd[1474]: time="2025-01-29T11:10:31.853053172Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:10:32.831880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874087643.mount: Deactivated successfully. Jan 29 11:10:33.160659 containerd[1474]: time="2025-01-29T11:10:33.160374262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:33.161858 containerd[1474]: time="2025-01-29T11:10:33.161774569Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=26772143" Jan 29 11:10:33.163113 containerd[1474]: time="2025-01-29T11:10:33.163024434Z" level=info msg="ImageCreate event name:\"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:33.165546 containerd[1474]: time="2025-01-29T11:10:33.165472841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:33.166391 containerd[1474]: time="2025-01-29T11:10:33.166231256Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\", repo tag 
\"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"26771136\" in 1.313136923s" Jan 29 11:10:33.166391 containerd[1474]: time="2025-01-29T11:10:33.166274937Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:571bb7ded0ff97311ed313f069becb58480cd66da04175981cfee2f3affe3e95\"" Jan 29 11:10:33.167076 containerd[1474]: time="2025-01-29T11:10:33.166827508Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:10:33.801680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045786314.mount: Deactivated successfully. Jan 29 11:10:34.438789 containerd[1474]: time="2025-01-29T11:10:34.438704380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:34.440320 containerd[1474]: time="2025-01-29T11:10:34.440000965Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 29 11:10:34.441563 containerd[1474]: time="2025-01-29T11:10:34.441100906Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:34.444663 containerd[1474]: time="2025-01-29T11:10:34.444591212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:34.446087 containerd[1474]: time="2025-01-29T11:10:34.445880997Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.279020129s" Jan 29 11:10:34.446087 containerd[1474]: time="2025-01-29T11:10:34.445950318Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 11:10:34.446809 containerd[1474]: time="2025-01-29T11:10:34.446598010Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:10:34.984694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount821711171.mount: Deactivated successfully. Jan 29 11:10:34.993364 containerd[1474]: time="2025-01-29T11:10:34.992590804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:34.994268 containerd[1474]: time="2025-01-29T11:10:34.994199755Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Jan 29 11:10:34.996298 containerd[1474]: time="2025-01-29T11:10:34.995788065Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:34.998710 containerd[1474]: time="2025-01-29T11:10:34.998641919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:34.999498 containerd[1474]: time="2025-01-29T11:10:34.999340653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 552.710162ms" Jan 29 
11:10:34.999498 containerd[1474]: time="2025-01-29T11:10:34.999379933Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 11:10:35.000302 containerd[1474]: time="2025-01-29T11:10:34.999973425Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 29 11:10:35.558622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126801516.mount: Deactivated successfully. Jan 29 11:10:36.990312 containerd[1474]: time="2025-01-29T11:10:36.990238708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:36.991880 containerd[1474]: time="2025-01-29T11:10:36.991812177Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Jan 29 11:10:36.992789 containerd[1474]: time="2025-01-29T11:10:36.992722394Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:36.996652 containerd[1474]: time="2025-01-29T11:10:36.996563984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:10:36.998985 containerd[1474]: time="2025-01-29T11:10:36.998250255Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.998186908s" Jan 29 11:10:36.998985 containerd[1474]: time="2025-01-29T11:10:36.998299495Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jan 29 11:10:40.151433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 29 11:10:40.159208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:40.278165 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:40.290428 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:10:40.341896 kubelet[2282]: E0129 11:10:40.339675 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:10:40.342850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:10:40.343032 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:10:42.468248 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:42.475458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:42.526373 systemd[1]: Reloading requested from client PID 2297 ('systemctl') (unit session-7.scope)... Jan 29 11:10:42.526541 systemd[1]: Reloading... Jan 29 11:10:42.642932 zram_generator::config[2337]: No configuration found. Jan 29 11:10:42.755745 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:10:42.823458 systemd[1]: Reloading finished in 296 ms. Jan 29 11:10:42.889204 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:10:42.893613 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 11:10:42.893875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:10:42.898761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:10:43.013950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:10:43.026360 (kubelet)[2387]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:10:43.071923 kubelet[2387]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:10:43.071923 kubelet[2387]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:10:43.071923 kubelet[2387]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:10:43.071923 kubelet[2387]: I0129 11:10:43.070537    2387 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:10:43.891960 kubelet[2387]: I0129 11:10:43.891131    2387 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 29 11:10:43.891960 kubelet[2387]: I0129 11:10:43.891171    2387 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:10:43.891960 kubelet[2387]: I0129 11:10:43.891429    2387 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 29 11:10:43.922116 kubelet[2387]: E0129 11:10:43.922041    2387 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://138.199.151.137:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:43.923188 kubelet[2387]: I0129 11:10:43.923164    2387 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:10:43.934939 kubelet[2387]: E0129 11:10:43.934845    2387 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 11:10:43.934939 kubelet[2387]: I0129 11:10:43.934880    2387 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 11:10:43.938786 kubelet[2387]: I0129 11:10:43.938716    2387 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:10:43.939970 kubelet[2387]: I0129 11:10:43.939930    2387 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 11:10:43.940412 kubelet[2387]: I0129 11:10:43.940349    2387 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:10:43.940593 kubelet[2387]: I0129 11:10:43.940387    2387 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-b-e71ed2fe96","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 11:10:43.940724 kubelet[2387]: I0129 11:10:43.940648    2387 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:10:43.940724 kubelet[2387]: I0129 11:10:43.940658    2387 container_manager_linux.go:300] "Creating device plugin manager"
Jan 29 11:10:43.941041 kubelet[2387]: I0129 11:10:43.940838    2387 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:10:43.942652 kubelet[2387]: I0129 11:10:43.942614    2387 kubelet.go:408] "Attempting to sync node with API server"
Jan 29 11:10:43.942652 kubelet[2387]: I0129 11:10:43.942644    2387 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:10:43.942749 kubelet[2387]: I0129 11:10:43.942670    2387 kubelet.go:314] "Adding apiserver pod source"
Jan 29 11:10:43.942749 kubelet[2387]: I0129 11:10:43.942688    2387 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:10:43.947185 kubelet[2387]: W0129 11:10:43.947065    2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.151.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-b-e71ed2fe96&limit=500&resourceVersion=0": dial tcp 138.199.151.137:6443: connect: connection refused
Jan 29 11:10:43.947960 kubelet[2387]: E0129 11:10:43.947426    2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.151.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-b-e71ed2fe96&limit=500&resourceVersion=0\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:43.947960 kubelet[2387]: I0129 11:10:43.947622    2387 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:10:43.949964 kubelet[2387]: I0129 11:10:43.949934    2387 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:10:43.951348 kubelet[2387]: W0129 11:10:43.951325    2387 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 11:10:43.953505 kubelet[2387]: W0129 11:10:43.953432    2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.151.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.151.137:6443: connect: connection refused
Jan 29 11:10:43.953505 kubelet[2387]: E0129 11:10:43.953495    2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.151.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:43.953505 kubelet[2387]: I0129 11:10:43.953812    2387 server.go:1269] "Started kubelet"
Jan 29 11:10:43.956188 kubelet[2387]: I0129 11:10:43.956150    2387 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:10:43.958138 kubelet[2387]: E0129 11:10:43.956754    2387 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.151.137:6443/api/v1/namespaces/default/events\": dial tcp 138.199.151.137:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-b-e71ed2fe96.181f2558ab8cbb74  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-b-e71ed2fe96,UID:ci-4152-2-0-b-e71ed2fe96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-b-e71ed2fe96,},FirstTimestamp:2025-01-29 11:10:43.953777524 +0000 UTC m=+0.922775467,LastTimestamp:2025-01-29 11:10:43.953777524 +0000 UTC m=+0.922775467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-b-e71ed2fe96,}"
Jan 29 11:10:43.961393 kubelet[2387]: I0129 11:10:43.961332    2387 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:10:43.962924 kubelet[2387]: I0129 11:10:43.962436    2387 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 11:10:43.963380 kubelet[2387]: I0129 11:10:43.963304    2387 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:10:43.963644 kubelet[2387]: I0129 11:10:43.963550    2387 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:10:43.963704 kubelet[2387]: I0129 11:10:43.963690    2387 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 11:10:43.963788 kubelet[2387]: I0129 11:10:43.963764    2387 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 11:10:43.964284 kubelet[2387]: E0129 11:10:43.964265    2387 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4152-2-0-b-e71ed2fe96\" not found"
Jan 29 11:10:43.965009 kubelet[2387]: I0129 11:10:43.964991    2387 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 11:10:43.965192 kubelet[2387]: I0129 11:10:43.965178    2387 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:10:43.966229 kubelet[2387]: W0129 11:10:43.966120    2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.151.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.151.137:6443: connect: connection refused
Jan 29 11:10:43.966229 kubelet[2387]: E0129 11:10:43.966189    2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.151.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:43.966973 kubelet[2387]: I0129 11:10:43.966501    2387 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:10:43.966973 kubelet[2387]: I0129 11:10:43.966589    2387 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:10:43.969045 kubelet[2387]: E0129 11:10:43.968794    2387 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:10:43.969389 kubelet[2387]: E0129 11:10:43.969358    2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.151.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-b-e71ed2fe96?timeout=10s\": dial tcp 138.199.151.137:6443: connect: connection refused" interval="200ms"
Jan 29 11:10:43.970014 kubelet[2387]: I0129 11:10:43.969708    2387 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:10:43.978254 kubelet[2387]: I0129 11:10:43.978055    2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:10:43.980168 kubelet[2387]: I0129 11:10:43.980135    2387 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:10:43.980691 kubelet[2387]: I0129 11:10:43.980297    2387 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:10:43.980691 kubelet[2387]: I0129 11:10:43.980323    2387 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 29 11:10:43.980691 kubelet[2387]: E0129 11:10:43.980443    2387 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:10:43.993039 kubelet[2387]: W0129 11:10:43.992963    2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.151.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.151.137:6443: connect: connection refused
Jan 29 11:10:43.993337 kubelet[2387]: E0129 11:10:43.993307    2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.151.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:44.001908 kubelet[2387]: I0129 11:10:44.001854    2387 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:10:44.002334 kubelet[2387]: I0129 11:10:44.002055    2387 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:10:44.002334 kubelet[2387]: I0129 11:10:44.002094    2387 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:10:44.004749 kubelet[2387]: I0129 11:10:44.004697    2387 policy_none.go:49] "None policy: Start"
Jan 29 11:10:44.005739 kubelet[2387]: I0129 11:10:44.005692    2387 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:10:44.005826 kubelet[2387]: I0129 11:10:44.005763    2387 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:10:44.012432 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 11:10:44.035617 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 11:10:44.042246 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 11:10:44.051935 kubelet[2387]: I0129 11:10:44.051689    2387 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 11:10:44.052473 kubelet[2387]: I0129 11:10:44.052289    2387 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 11:10:44.052550 kubelet[2387]: I0129 11:10:44.052488    2387 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 11:10:44.053763 kubelet[2387]: I0129 11:10:44.053534    2387 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 11:10:44.056504 kubelet[2387]: E0129 11:10:44.056387    2387 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-b-e71ed2fe96\" not found"
Jan 29 11:10:44.095994 systemd[1]: Created slice kubepods-burstable-pod5be7abbaa13e091e11ab0c7696d62446.slice - libcontainer container kubepods-burstable-pod5be7abbaa13e091e11ab0c7696d62446.slice.
Jan 29 11:10:44.123522 systemd[1]: Created slice kubepods-burstable-pod345b4fcf7d0b600eda1396915ca8fd57.slice - libcontainer container kubepods-burstable-pod345b4fcf7d0b600eda1396915ca8fd57.slice.
Jan 29 11:10:44.142532 systemd[1]: Created slice kubepods-burstable-pod7b9f70bee690eb7c89d058557c941efc.slice - libcontainer container kubepods-burstable-pod7b9f70bee690eb7c89d058557c941efc.slice.
Jan 29 11:10:44.156067 kubelet[2387]: I0129 11:10:44.155909    2387 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.156587 kubelet[2387]: E0129 11:10:44.156485    2387 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.151.137:6443/api/v1/nodes\": dial tcp 138.199.151.137:6443: connect: connection refused" node="ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.166532 kubelet[2387]: I0129 11:10:44.166372    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.166532 kubelet[2387]: I0129 11:10:44.166435    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/345b4fcf7d0b600eda1396915ca8fd57-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-b-e71ed2fe96\" (UID: \"345b4fcf7d0b600eda1396915ca8fd57\") " pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.166532 kubelet[2387]: I0129 11:10:44.166473    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/345b4fcf7d0b600eda1396915ca8fd57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-b-e71ed2fe96\" (UID: \"345b4fcf7d0b600eda1396915ca8fd57\") " pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.166532 kubelet[2387]: I0129 11:10:44.166505    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.166532 kubelet[2387]: I0129 11:10:44.166539    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.167006 kubelet[2387]: I0129 11:10:44.166573    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.167006 kubelet[2387]: I0129 11:10:44.166610    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.167006 kubelet[2387]: I0129 11:10:44.166647    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b9f70bee690eb7c89d058557c941efc-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-b-e71ed2fe96\" (UID: \"7b9f70bee690eb7c89d058557c941efc\") " pod="kube-system/kube-scheduler-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.167006 kubelet[2387]: I0129 11:10:44.166690    2387 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/345b4fcf7d0b600eda1396915ca8fd57-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-b-e71ed2fe96\" (UID: \"345b4fcf7d0b600eda1396915ca8fd57\") " pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.170615 kubelet[2387]: E0129 11:10:44.170563    2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.151.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-b-e71ed2fe96?timeout=10s\": dial tcp 138.199.151.137:6443: connect: connection refused" interval="400ms"
Jan 29 11:10:44.360036 kubelet[2387]: I0129 11:10:44.359429    2387 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.360036 kubelet[2387]: E0129 11:10:44.359962    2387 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.151.137:6443/api/v1/nodes\": dial tcp 138.199.151.137:6443: connect: connection refused" node="ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.420380 containerd[1474]: time="2025-01-29T11:10:44.420316174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-b-e71ed2fe96,Uid:5be7abbaa13e091e11ab0c7696d62446,Namespace:kube-system,Attempt:0,}"
Jan 29 11:10:44.428870 containerd[1474]: time="2025-01-29T11:10:44.428469342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-b-e71ed2fe96,Uid:345b4fcf7d0b600eda1396915ca8fd57,Namespace:kube-system,Attempt:0,}"
Jan 29 11:10:44.450821 containerd[1474]: time="2025-01-29T11:10:44.450751729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-b-e71ed2fe96,Uid:7b9f70bee690eb7c89d058557c941efc,Namespace:kube-system,Attempt:0,}"
Jan 29 11:10:44.571991 kubelet[2387]: E0129 11:10:44.571841    2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.151.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-b-e71ed2fe96?timeout=10s\": dial tcp 138.199.151.137:6443: connect: connection refused" interval="800ms"
Jan 29 11:10:44.764080 kubelet[2387]: I0129 11:10:44.763866    2387 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.764693 kubelet[2387]: E0129 11:10:44.764323    2387 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://138.199.151.137:6443/api/v1/nodes\": dial tcp 138.199.151.137:6443: connect: connection refused" node="ci-4152-2-0-b-e71ed2fe96"
Jan 29 11:10:44.839982 kubelet[2387]: W0129 11:10:44.839927    2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://138.199.151.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.151.137:6443: connect: connection refused
Jan 29 11:10:44.840209 kubelet[2387]: E0129 11:10:44.839992    2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://138.199.151.137:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:44.945005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762013165.mount: Deactivated successfully.
Jan 29 11:10:44.953271 containerd[1474]: time="2025-01-29T11:10:44.953214166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:44.958286 containerd[1474]: time="2025-01-29T11:10:44.958091442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jan 29 11:10:44.960547 containerd[1474]: time="2025-01-29T11:10:44.959505664Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:44.960652 kubelet[2387]: W0129 11:10:44.960286    2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://138.199.151.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 138.199.151.137:6443: connect: connection refused
Jan 29 11:10:44.960652 kubelet[2387]: E0129 11:10:44.960464    2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://138.199.151.137:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:44.962280 containerd[1474]: time="2025-01-29T11:10:44.961933102Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:44.965065 containerd[1474]: time="2025-01-29T11:10:44.964994990Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:10:44.965852 kubelet[2387]: W0129 11:10:44.965723    2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://138.199.151.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.151.137:6443: connect: connection refused
Jan 29 11:10:44.965852 kubelet[2387]: E0129 11:10:44.965786    2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://138.199.151.137:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:44.967603 containerd[1474]: time="2025-01-29T11:10:44.967474868Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:10:44.968516 containerd[1474]: time="2025-01-29T11:10:44.968461884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:44.974500 containerd[1474]: time="2025-01-29T11:10:44.973866448Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 522.980837ms"
Jan 29 11:10:44.976961 containerd[1474]: time="2025-01-29T11:10:44.976687252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.257036ms"
Jan 29 11:10:44.977514 containerd[1474]: time="2025-01-29T11:10:44.977481384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:10:44.979045 containerd[1474]: time="2025-01-29T11:10:44.979011368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.392504ms"
Jan 29 11:10:45.094535 containerd[1474]: time="2025-01-29T11:10:45.094244819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:10:45.094535 containerd[1474]: time="2025-01-29T11:10:45.094314020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:10:45.094535 containerd[1474]: time="2025-01-29T11:10:45.094336740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:45.094535 containerd[1474]: time="2025-01-29T11:10:45.094418581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:45.101618 containerd[1474]: time="2025-01-29T11:10:45.101375808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:10:45.101618 containerd[1474]: time="2025-01-29T11:10:45.101448329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:10:45.101618 containerd[1474]: time="2025-01-29T11:10:45.101464929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:45.101913 containerd[1474]: time="2025-01-29T11:10:45.101749533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:45.108325 containerd[1474]: time="2025-01-29T11:10:45.108075470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:10:45.108325 containerd[1474]: time="2025-01-29T11:10:45.108161072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:10:45.108325 containerd[1474]: time="2025-01-29T11:10:45.108182672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:45.108325 containerd[1474]: time="2025-01-29T11:10:45.108266273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:10:45.119327 systemd[1]: Started cri-containerd-19ffc8f5850fe692071229398b84f171be4cda4344c6b1418721848dd4fd15ee.scope - libcontainer container 19ffc8f5850fe692071229398b84f171be4cda4344c6b1418721848dd4fd15ee.
Jan 29 11:10:45.127007 kubelet[2387]: W0129 11:10:45.125969    2387 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://138.199.151.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-b-e71ed2fe96&limit=500&resourceVersion=0": dial tcp 138.199.151.137:6443: connect: connection refused
Jan 29 11:10:45.127007 kubelet[2387]: E0129 11:10:45.126032    2387 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://138.199.151.137:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-b-e71ed2fe96&limit=500&resourceVersion=0\": dial tcp 138.199.151.137:6443: connect: connection refused" logger="UnhandledError"
Jan 29 11:10:45.138168 systemd[1]: Started cri-containerd-0298116b07414285d92d425febc2c8cc9b4f5c5c3e6ec5acc888ca72e9b2205c.scope - libcontainer container 0298116b07414285d92d425febc2c8cc9b4f5c5c3e6ec5acc888ca72e9b2205c.
Jan 29 11:10:45.144257 systemd[1]: Started cri-containerd-77f971f9e7a29b83651ae296e3eb51c0cc18c936d2fff32de2625a46eb4b36be.scope - libcontainer container 77f971f9e7a29b83651ae296e3eb51c0cc18c936d2fff32de2625a46eb4b36be.
Jan 29 11:10:45.182937 containerd[1474]: time="2025-01-29T11:10:45.181988922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-b-e71ed2fe96,Uid:5be7abbaa13e091e11ab0c7696d62446,Namespace:kube-system,Attempt:0,} returns sandbox id \"19ffc8f5850fe692071229398b84f171be4cda4344c6b1418721848dd4fd15ee\"" Jan 29 11:10:45.189782 containerd[1474]: time="2025-01-29T11:10:45.189725920Z" level=info msg="CreateContainer within sandbox \"19ffc8f5850fe692071229398b84f171be4cda4344c6b1418721848dd4fd15ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:10:45.196567 containerd[1474]: time="2025-01-29T11:10:45.196515344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-b-e71ed2fe96,Uid:345b4fcf7d0b600eda1396915ca8fd57,Namespace:kube-system,Attempt:0,} returns sandbox id \"77f971f9e7a29b83651ae296e3eb51c0cc18c936d2fff32de2625a46eb4b36be\"" Jan 29 11:10:45.208423 containerd[1474]: time="2025-01-29T11:10:45.208378646Z" level=info msg="CreateContainer within sandbox \"77f971f9e7a29b83651ae296e3eb51c0cc18c936d2fff32de2625a46eb4b36be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:10:45.216664 containerd[1474]: time="2025-01-29T11:10:45.216621412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-b-e71ed2fe96,Uid:7b9f70bee690eb7c89d058557c941efc,Namespace:kube-system,Attempt:0,} returns sandbox id \"0298116b07414285d92d425febc2c8cc9b4f5c5c3e6ec5acc888ca72e9b2205c\"" Jan 29 11:10:45.219960 containerd[1474]: time="2025-01-29T11:10:45.219923823Z" level=info msg="CreateContainer within sandbox \"0298116b07414285d92d425febc2c8cc9b4f5c5c3e6ec5acc888ca72e9b2205c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:10:45.222532 containerd[1474]: time="2025-01-29T11:10:45.222480502Z" level=info msg="CreateContainer within sandbox 
\"19ffc8f5850fe692071229398b84f171be4cda4344c6b1418721848dd4fd15ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e\"" Jan 29 11:10:45.224697 containerd[1474]: time="2025-01-29T11:10:45.224071406Z" level=info msg="StartContainer for \"9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e\"" Jan 29 11:10:45.238646 containerd[1474]: time="2025-01-29T11:10:45.238590028Z" level=info msg="CreateContainer within sandbox \"0298116b07414285d92d425febc2c8cc9b4f5c5c3e6ec5acc888ca72e9b2205c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd\"" Jan 29 11:10:45.239353 containerd[1474]: time="2025-01-29T11:10:45.239317519Z" level=info msg="StartContainer for \"59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd\"" Jan 29 11:10:45.243366 containerd[1474]: time="2025-01-29T11:10:45.243208899Z" level=info msg="CreateContainer within sandbox \"77f971f9e7a29b83651ae296e3eb51c0cc18c936d2fff32de2625a46eb4b36be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b27dc2db6eaddd40ffda3498158d09acce516a5bd4094d0717ba931f12c2f13\"" Jan 29 11:10:45.245207 containerd[1474]: time="2025-01-29T11:10:45.245176289Z" level=info msg="StartContainer for \"9b27dc2db6eaddd40ffda3498158d09acce516a5bd4094d0717ba931f12c2f13\"" Jan 29 11:10:45.260259 systemd[1]: Started cri-containerd-9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e.scope - libcontainer container 9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e. Jan 29 11:10:45.280159 systemd[1]: Started cri-containerd-59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd.scope - libcontainer container 59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd. 
Jan 29 11:10:45.293359 systemd[1]: Started cri-containerd-9b27dc2db6eaddd40ffda3498158d09acce516a5bd4094d0717ba931f12c2f13.scope - libcontainer container 9b27dc2db6eaddd40ffda3498158d09acce516a5bd4094d0717ba931f12c2f13. Jan 29 11:10:45.345743 containerd[1474]: time="2025-01-29T11:10:45.344523490Z" level=info msg="StartContainer for \"9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e\" returns successfully" Jan 29 11:10:45.352684 containerd[1474]: time="2025-01-29T11:10:45.352244928Z" level=info msg="StartContainer for \"59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd\" returns successfully" Jan 29 11:10:45.372610 kubelet[2387]: E0129 11:10:45.372563 2387 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.151.137:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-b-e71ed2fe96?timeout=10s\": dial tcp 138.199.151.137:6443: connect: connection refused" interval="1.6s" Jan 29 11:10:45.375476 containerd[1474]: time="2025-01-29T11:10:45.375335482Z" level=info msg="StartContainer for \"9b27dc2db6eaddd40ffda3498158d09acce516a5bd4094d0717ba931f12c2f13\" returns successfully" Jan 29 11:10:45.569979 kubelet[2387]: I0129 11:10:45.567379 2387 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:47.686567 kubelet[2387]: E0129 11:10:47.686507 2387 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-b-e71ed2fe96\" not found" node="ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:47.734283 kubelet[2387]: I0129 11:10:47.733955 2387 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:47.782263 kubelet[2387]: E0129 11:10:47.782151 2387 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-b-e71ed2fe96.181f2558ab8cbb74 default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-b-e71ed2fe96,UID:ci-4152-2-0-b-e71ed2fe96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-b-e71ed2fe96,},FirstTimestamp:2025-01-29 11:10:43.953777524 +0000 UTC m=+0.922775467,LastTimestamp:2025-01-29 11:10:43.953777524 +0000 UTC m=+0.922775467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-b-e71ed2fe96,}" Jan 29 11:10:47.848392 kubelet[2387]: E0129 11:10:47.848096 2387 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-b-e71ed2fe96.181f2558ac71a282 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-b-e71ed2fe96,UID:ci-4152-2-0-b-e71ed2fe96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-b-e71ed2fe96,},FirstTimestamp:2025-01-29 11:10:43.968778882 +0000 UTC m=+0.937776825,LastTimestamp:2025-01-29 11:10:43.968778882 +0000 UTC m=+0.937776825,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-b-e71ed2fe96,}" Jan 29 11:10:47.906007 kubelet[2387]: E0129 11:10:47.905698 2387 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-b-e71ed2fe96.181f2558ae5e0e03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-b-e71ed2fe96,UID:ci-4152-2-0-b-e71ed2fe96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4152-2-0-b-e71ed2fe96 status is 
now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-b-e71ed2fe96,},FirstTimestamp:2025-01-29 11:10:44.001050115 +0000 UTC m=+0.970048018,LastTimestamp:2025-01-29 11:10:44.001050115 +0000 UTC m=+0.970048018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-b-e71ed2fe96,}" Jan 29 11:10:47.956826 kubelet[2387]: I0129 11:10:47.956238 2387 apiserver.go:52] "Watching apiserver" Jan 29 11:10:47.965926 kubelet[2387]: I0129 11:10:47.965816 2387 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:10:49.979359 systemd[1]: Reloading requested from client PID 2666 ('systemctl') (unit session-7.scope)... Jan 29 11:10:49.979399 systemd[1]: Reloading... Jan 29 11:10:50.089921 zram_generator::config[2718]: No configuration found. Jan 29 11:10:50.188430 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:10:50.270864 systemd[1]: Reloading finished in 290 ms. Jan 29 11:10:50.312810 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:50.326704 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:10:50.327203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:10:50.327285 systemd[1]: kubelet.service: Consumed 1.377s CPU time, 116.3M memory peak, 0B memory swap peak. Jan 29 11:10:50.336788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:10:50.458493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:10:50.472325 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:10:50.530822 kubelet[2751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:10:50.532913 kubelet[2751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:10:50.532913 kubelet[2751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:10:50.532913 kubelet[2751]: I0129 11:10:50.531275 2751 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:10:50.546166 kubelet[2751]: I0129 11:10:50.546109 2751 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:10:50.546166 kubelet[2751]: I0129 11:10:50.546156 2751 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:10:50.546481 kubelet[2751]: I0129 11:10:50.546461 2751 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:10:50.548330 kubelet[2751]: I0129 11:10:50.548296 2751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 11:10:50.551017 kubelet[2751]: I0129 11:10:50.550792 2751 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:10:50.554537 kubelet[2751]: E0129 11:10:50.554500 2751 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:10:50.554537 kubelet[2751]: I0129 11:10:50.554538 2751 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:10:50.557247 kubelet[2751]: I0129 11:10:50.557212 2751 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:10:50.557389 kubelet[2751]: I0129 11:10:50.557355 2751 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:10:50.557526 kubelet[2751]: I0129 11:10:50.557480 2751 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:10:50.557702 kubelet[2751]: I0129 11:10:50.557506 2751 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4152-2-0-b-e71ed2fe96","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:10:50.557776 kubelet[2751]: I0129 11:10:50.557706 2751 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:10:50.557776 kubelet[2751]: I0129 11:10:50.557715 2751 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:10:50.557776 kubelet[2751]: I0129 11:10:50.557744 2751 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:10:50.557895 kubelet[2751]: I0129 11:10:50.557863 2751 
kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:10:50.558506 kubelet[2751]: I0129 11:10:50.558482 2751 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:10:50.558591 kubelet[2751]: I0129 11:10:50.558524 2751 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:10:50.558591 kubelet[2751]: I0129 11:10:50.558547 2751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:10:50.562205 kubelet[2751]: I0129 11:10:50.562081 2751 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:10:50.562843 kubelet[2751]: I0129 11:10:50.562633 2751 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:10:50.563124 kubelet[2751]: I0129 11:10:50.563102 2751 server.go:1269] "Started kubelet" Jan 29 11:10:50.569431 kubelet[2751]: I0129 11:10:50.569394 2751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:10:50.571940 kubelet[2751]: I0129 11:10:50.571858 2751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:10:50.575901 kubelet[2751]: I0129 11:10:50.575809 2751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:10:50.576243 kubelet[2751]: I0129 11:10:50.576182 2751 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:10:50.576491 kubelet[2751]: I0129 11:10:50.576448 2751 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:10:50.586697 kubelet[2751]: I0129 11:10:50.586650 2751 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:10:50.586840 kubelet[2751]: E0129 11:10:50.586819 2751 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4152-2-0-b-e71ed2fe96\" not found" Jan 29 11:10:50.587976 kubelet[2751]: I0129 11:10:50.587941 2751 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:10:50.590830 kubelet[2751]: I0129 11:10:50.590790 2751 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:10:50.590994 kubelet[2751]: I0129 11:10:50.590979 2751 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:10:50.600742 kubelet[2751]: I0129 11:10:50.600704 2751 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:10:50.600990 kubelet[2751]: I0129 11:10:50.600807 2751 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:10:50.610206 kubelet[2751]: I0129 11:10:50.610110 2751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:10:50.611306 kubelet[2751]: I0129 11:10:50.611268 2751 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:10:50.611306 kubelet[2751]: I0129 11:10:50.611300 2751 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:10:50.611492 kubelet[2751]: I0129 11:10:50.611320 2751 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:10:50.611492 kubelet[2751]: E0129 11:10:50.611363 2751 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:10:50.619567 kubelet[2751]: I0129 11:10:50.619405 2751 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:10:50.679269 kubelet[2751]: I0129 11:10:50.679215 2751 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:10:50.680111 kubelet[2751]: I0129 11:10:50.679434 2751 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:10:50.680111 kubelet[2751]: I0129 11:10:50.679473 2751 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:10:50.680111 kubelet[2751]: I0129 11:10:50.679800 2751 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:10:50.680111 kubelet[2751]: I0129 11:10:50.679819 2751 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:10:50.680111 kubelet[2751]: I0129 11:10:50.679970 2751 policy_none.go:49] "None policy: Start" Jan 29 11:10:50.680669 kubelet[2751]: I0129 11:10:50.680650 2751 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:10:50.680669 kubelet[2751]: I0129 11:10:50.680674 2751 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:10:50.680966 kubelet[2751]: I0129 11:10:50.680857 2751 state_mem.go:75] "Updated machine memory state" Jan 29 11:10:50.685822 kubelet[2751]: I0129 11:10:50.685362 2751 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:10:50.685822 kubelet[2751]: I0129 11:10:50.685540 2751 eviction_manager.go:189] 
"Eviction manager: starting control loop" Jan 29 11:10:50.685822 kubelet[2751]: I0129 11:10:50.685550 2751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:10:50.685822 kubelet[2751]: I0129 11:10:50.685738 2751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:10:50.723293 kubelet[2751]: E0129 11:10:50.723236 2751 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-b-e71ed2fe96\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.797011 kubelet[2751]: I0129 11:10:50.795926 2751 kubelet_node_status.go:72] "Attempting to register node" node="ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.807977 kubelet[2751]: I0129 11:10:50.807939 2751 kubelet_node_status.go:111] "Node was previously registered" node="ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.808169 kubelet[2751]: I0129 11:10:50.808075 2751 kubelet_node_status.go:75] "Successfully registered node" node="ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.891725 kubelet[2751]: I0129 11:10:50.891557 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/345b4fcf7d0b600eda1396915ca8fd57-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-b-e71ed2fe96\" (UID: \"345b4fcf7d0b600eda1396915ca8fd57\") " pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.891725 kubelet[2751]: I0129 11:10:50.891633 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/345b4fcf7d0b600eda1396915ca8fd57-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-b-e71ed2fe96\" (UID: \"345b4fcf7d0b600eda1396915ca8fd57\") " pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.891725 kubelet[2751]: I0129 11:10:50.891678 2751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/345b4fcf7d0b600eda1396915ca8fd57-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-b-e71ed2fe96\" (UID: \"345b4fcf7d0b600eda1396915ca8fd57\") " pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.892255 kubelet[2751]: I0129 11:10:50.891750 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.892255 kubelet[2751]: I0129 11:10:50.891827 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.892255 kubelet[2751]: I0129 11:10:50.891874 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.892255 kubelet[2751]: I0129 11:10:50.891934 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: 
\"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.892255 kubelet[2751]: I0129 11:10:50.891974 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5be7abbaa13e091e11ab0c7696d62446-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-b-e71ed2fe96\" (UID: \"5be7abbaa13e091e11ab0c7696d62446\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.892518 kubelet[2751]: I0129 11:10:50.892020 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b9f70bee690eb7c89d058557c941efc-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-b-e71ed2fe96\" (UID: \"7b9f70bee690eb7c89d058557c941efc\") " pod="kube-system/kube-scheduler-ci-4152-2-0-b-e71ed2fe96" Jan 29 11:10:50.977416 sudo[2784]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:10:50.977723 sudo[2784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:10:51.411836 sudo[2784]: pam_unix(sudo:session): session closed for user root Jan 29 11:10:51.561468 kubelet[2751]: I0129 11:10:51.561426 2751 apiserver.go:52] "Watching apiserver" Jan 29 11:10:51.591452 kubelet[2751]: I0129 11:10:51.591409 2751 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:10:51.723520 kubelet[2751]: I0129 11:10:51.722497 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-b-e71ed2fe96" podStartSLOduration=1.722478723 podStartE2EDuration="1.722478723s" podCreationTimestamp="2025-01-29 11:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-29 11:10:51.703493982 +0000 UTC m=+1.225496191" watchObservedRunningTime="2025-01-29 11:10:51.722478723 +0000 UTC m=+1.244480932" Jan 29 11:10:51.723520 kubelet[2751]: I0129 11:10:51.722629 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-b-e71ed2fe96" podStartSLOduration=1.722624725 podStartE2EDuration="1.722624725s" podCreationTimestamp="2025-01-29 11:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:51.720009809 +0000 UTC m=+1.242012018" watchObservedRunningTime="2025-01-29 11:10:51.722624725 +0000 UTC m=+1.244626894" Jan 29 11:10:51.765752 kubelet[2751]: I0129 11:10:51.764288 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96" podStartSLOduration=3.764269618 podStartE2EDuration="3.764269618s" podCreationTimestamp="2025-01-29 11:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:51.736765999 +0000 UTC m=+1.258768208" watchObservedRunningTime="2025-01-29 11:10:51.764269618 +0000 UTC m=+1.286271827" Jan 29 11:10:53.148010 sudo[1862]: pam_unix(sudo:session): session closed for user root Jan 29 11:10:53.308071 sshd[1861]: Connection closed by 147.75.109.163 port 45922 Jan 29 11:10:53.309099 sshd-session[1859]: pam_unix(sshd:session): session closed for user core Jan 29 11:10:53.315115 systemd[1]: sshd@6-138.199.151.137:22-147.75.109.163:45922.service: Deactivated successfully. Jan 29 11:10:53.317634 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:10:53.317998 systemd[1]: session-7.scope: Consumed 7.272s CPU time, 153.9M memory peak, 0B memory swap peak. Jan 29 11:10:53.320496 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. 
Jan 29 11:10:53.322546 systemd-logind[1452]: Removed session 7. Jan 29 11:10:56.250073 kubelet[2751]: I0129 11:10:56.249869 2751 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:10:56.251242 containerd[1474]: time="2025-01-29T11:10:56.251005223Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:10:56.251685 kubelet[2751]: I0129 11:10:56.251427 2751 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:10:56.965701 systemd[1]: Created slice kubepods-besteffort-pod9ee88db6_ff78_4c6e_b946_3ed83d6c9a42.slice - libcontainer container kubepods-besteffort-pod9ee88db6_ff78_4c6e_b946_3ed83d6c9a42.slice. Jan 29 11:10:56.982487 systemd[1]: Created slice kubepods-burstable-pod6f224cc2_8b4b_4085_9f27_180208d3eb0e.slice - libcontainer container kubepods-burstable-pod6f224cc2_8b4b_4085_9f27_180208d3eb0e.slice. Jan 29 11:10:57.037936 kubelet[2751]: I0129 11:10:57.037409 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm7zs\" (UniqueName: \"kubernetes.io/projected/9ee88db6-ff78-4c6e-b946-3ed83d6c9a42-kube-api-access-fm7zs\") pod \"kube-proxy-dt4n9\" (UID: \"9ee88db6-ff78-4c6e-b946-3ed83d6c9a42\") " pod="kube-system/kube-proxy-dt4n9" Jan 29 11:10:57.038289 kubelet[2751]: I0129 11:10:57.038018 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-lib-modules\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038289 kubelet[2751]: I0129 11:10:57.038084 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-cgroup\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038289 kubelet[2751]: I0129 11:10:57.038126 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-config-path\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038289 kubelet[2751]: I0129 11:10:57.038163 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-bpf-maps\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038289 kubelet[2751]: I0129 11:10:57.038233 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-hostproc\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038289 kubelet[2751]: I0129 11:10:57.038284 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ee88db6-ff78-4c6e-b946-3ed83d6c9a42-lib-modules\") pod \"kube-proxy-dt4n9\" (UID: \"9ee88db6-ff78-4c6e-b946-3ed83d6c9a42\") " pod="kube-system/kube-proxy-dt4n9" Jan 29 11:10:57.038589 kubelet[2751]: I0129 11:10:57.038321 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cni-path\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " 
pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038589 kubelet[2751]: I0129 11:10:57.038371 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-etc-cni-netd\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038589 kubelet[2751]: I0129 11:10:57.038410 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f224cc2-8b4b-4085-9f27-180208d3eb0e-clustermesh-secrets\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038589 kubelet[2751]: I0129 11:10:57.038445 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-host-proc-sys-kernel\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038589 kubelet[2751]: I0129 11:10:57.038483 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ee88db6-ff78-4c6e-b946-3ed83d6c9a42-xtables-lock\") pod \"kube-proxy-dt4n9\" (UID: \"9ee88db6-ff78-4c6e-b946-3ed83d6c9a42\") " pod="kube-system/kube-proxy-dt4n9" Jan 29 11:10:57.038589 kubelet[2751]: I0129 11:10:57.038517 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-host-proc-sys-net\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038718 kubelet[2751]: I0129 11:10:57.038553 2751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df2rp\" (UniqueName: \"kubernetes.io/projected/6f224cc2-8b4b-4085-9f27-180208d3eb0e-kube-api-access-df2rp\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038718 kubelet[2751]: I0129 11:10:57.038600 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-run\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038718 kubelet[2751]: I0129 11:10:57.038636 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f224cc2-8b4b-4085-9f27-180208d3eb0e-hubble-tls\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.038718 kubelet[2751]: I0129 11:10:57.038696 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ee88db6-ff78-4c6e-b946-3ed83d6c9a42-kube-proxy\") pod \"kube-proxy-dt4n9\" (UID: \"9ee88db6-ff78-4c6e-b946-3ed83d6c9a42\") " pod="kube-system/kube-proxy-dt4n9" Jan 29 11:10:57.038837 kubelet[2751]: I0129 11:10:57.038733 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-xtables-lock\") pod \"cilium-kvf4q\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " pod="kube-system/cilium-kvf4q" Jan 29 11:10:57.278910 containerd[1474]: time="2025-01-29T11:10:57.278105472Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-dt4n9,Uid:9ee88db6-ff78-4c6e-b946-3ed83d6c9a42,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:57.289063 containerd[1474]: time="2025-01-29T11:10:57.289022728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvf4q,Uid:6f224cc2-8b4b-4085-9f27-180208d3eb0e,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:57.330497 containerd[1474]: time="2025-01-29T11:10:57.329748877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:57.332719 containerd[1474]: time="2025-01-29T11:10:57.330466006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:57.332719 containerd[1474]: time="2025-01-29T11:10:57.332325949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:57.335090 containerd[1474]: time="2025-01-29T11:10:57.334028090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:57.352501 containerd[1474]: time="2025-01-29T11:10:57.352157277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:57.352501 containerd[1474]: time="2025-01-29T11:10:57.352248278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:57.352501 containerd[1474]: time="2025-01-29T11:10:57.352273398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:57.352501 containerd[1474]: time="2025-01-29T11:10:57.352366879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:57.365110 systemd[1]: Started cri-containerd-22f43c7d2b37affa0dcd4bd61ce285110549b3031dfb708bdf699201528c1c9b.scope - libcontainer container 22f43c7d2b37affa0dcd4bd61ce285110549b3031dfb708bdf699201528c1c9b. Jan 29 11:10:57.375940 systemd[1]: Created slice kubepods-besteffort-pod57b68f2e_7e28_4105_b1b9_fe9478579790.slice - libcontainer container kubepods-besteffort-pod57b68f2e_7e28_4105_b1b9_fe9478579790.slice. Jan 29 11:10:57.396072 systemd[1]: Started cri-containerd-3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad.scope - libcontainer container 3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad. Jan 29 11:10:57.441750 kubelet[2751]: I0129 11:10:57.441703 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57b68f2e-7e28-4105-b1b9-fe9478579790-cilium-config-path\") pod \"cilium-operator-5d85765b45-lxjg9\" (UID: \"57b68f2e-7e28-4105-b1b9-fe9478579790\") " pod="kube-system/cilium-operator-5d85765b45-lxjg9" Jan 29 11:10:57.441750 kubelet[2751]: I0129 11:10:57.441751 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndwl\" (UniqueName: \"kubernetes.io/projected/57b68f2e-7e28-4105-b1b9-fe9478579790-kube-api-access-tndwl\") pod \"cilium-operator-5d85765b45-lxjg9\" (UID: \"57b68f2e-7e28-4105-b1b9-fe9478579790\") " pod="kube-system/cilium-operator-5d85765b45-lxjg9" Jan 29 11:10:57.454232 containerd[1474]: time="2025-01-29T11:10:57.454172870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kvf4q,Uid:6f224cc2-8b4b-4085-9f27-180208d3eb0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\"" Jan 29 11:10:57.458538 containerd[1474]: time="2025-01-29T11:10:57.458502244Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:10:57.487514 containerd[1474]: time="2025-01-29T11:10:57.487390485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dt4n9,Uid:9ee88db6-ff78-4c6e-b946-3ed83d6c9a42,Namespace:kube-system,Attempt:0,} returns sandbox id \"22f43c7d2b37affa0dcd4bd61ce285110549b3031dfb708bdf699201528c1c9b\"" Jan 29 11:10:57.492089 containerd[1474]: time="2025-01-29T11:10:57.491944902Z" level=info msg="CreateContainer within sandbox \"22f43c7d2b37affa0dcd4bd61ce285110549b3031dfb708bdf699201528c1c9b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:10:57.511318 containerd[1474]: time="2025-01-29T11:10:57.511266663Z" level=info msg="CreateContainer within sandbox \"22f43c7d2b37affa0dcd4bd61ce285110549b3031dfb708bdf699201528c1c9b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bc2f9412316f9b6822736ee68c404ecf4a502a900a51da0b8d58e4c8eb9381d8\"" Jan 29 11:10:57.514522 containerd[1474]: time="2025-01-29T11:10:57.514487623Z" level=info msg="StartContainer for \"bc2f9412316f9b6822736ee68c404ecf4a502a900a51da0b8d58e4c8eb9381d8\"" Jan 29 11:10:57.548074 systemd[1]: Started cri-containerd-bc2f9412316f9b6822736ee68c404ecf4a502a900a51da0b8d58e4c8eb9381d8.scope - libcontainer container bc2f9412316f9b6822736ee68c404ecf4a502a900a51da0b8d58e4c8eb9381d8. 
Jan 29 11:10:57.591255 containerd[1474]: time="2025-01-29T11:10:57.591181220Z" level=info msg="StartContainer for \"bc2f9412316f9b6822736ee68c404ecf4a502a900a51da0b8d58e4c8eb9381d8\" returns successfully" Jan 29 11:10:57.682054 containerd[1474]: time="2025-01-29T11:10:57.681935553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lxjg9,Uid:57b68f2e-7e28-4105-b1b9-fe9478579790,Namespace:kube-system,Attempt:0,}" Jan 29 11:10:57.715411 containerd[1474]: time="2025-01-29T11:10:57.714544640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:10:57.715411 containerd[1474]: time="2025-01-29T11:10:57.714798524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:10:57.715411 containerd[1474]: time="2025-01-29T11:10:57.714824804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:57.716952 containerd[1474]: time="2025-01-29T11:10:57.716026859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:10:57.739679 systemd[1]: Started cri-containerd-ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8.scope - libcontainer container ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8. Jan 29 11:10:57.782589 containerd[1474]: time="2025-01-29T11:10:57.782516329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lxjg9,Uid:57b68f2e-7e28-4105-b1b9-fe9478579790,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\"" Jan 29 11:11:00.964597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656958812.mount: Deactivated successfully. 
Jan 29 11:11:02.368233 containerd[1474]: time="2025-01-29T11:11:02.368159120Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:02.369336 containerd[1474]: time="2025-01-29T11:11:02.369261333Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 11:11:02.370195 containerd[1474]: time="2025-01-29T11:11:02.370146503Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:02.374348 containerd[1474]: time="2025-01-29T11:11:02.374176630Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.915472263s" Jan 29 11:11:02.374348 containerd[1474]: time="2025-01-29T11:11:02.374273271Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 11:11:02.376304 containerd[1474]: time="2025-01-29T11:11:02.376138052Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:11:02.379586 containerd[1474]: time="2025-01-29T11:11:02.378984925Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:11:02.399471 containerd[1474]: time="2025-01-29T11:11:02.399372762Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\"" Jan 29 11:11:02.400997 containerd[1474]: time="2025-01-29T11:11:02.400925460Z" level=info msg="StartContainer for \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\"" Jan 29 11:11:02.440285 systemd[1]: Started cri-containerd-1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395.scope - libcontainer container 1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395. Jan 29 11:11:02.474463 containerd[1474]: time="2025-01-29T11:11:02.474399991Z" level=info msg="StartContainer for \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\" returns successfully" Jan 29 11:11:02.490281 systemd[1]: cri-containerd-1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395.scope: Deactivated successfully. 
Jan 29 11:11:02.715802 containerd[1474]: time="2025-01-29T11:11:02.715725468Z" level=info msg="shim disconnected" id=1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395 namespace=k8s.io Jan 29 11:11:02.716639 containerd[1474]: time="2025-01-29T11:11:02.716078032Z" level=warning msg="cleaning up after shim disconnected" id=1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395 namespace=k8s.io Jan 29 11:11:02.716639 containerd[1474]: time="2025-01-29T11:11:02.716097832Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:02.728966 kubelet[2751]: I0129 11:11:02.728470 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dt4n9" podStartSLOduration=6.728450895 podStartE2EDuration="6.728450895s" podCreationTimestamp="2025-01-29 11:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:10:57.70492728 +0000 UTC m=+7.226929449" watchObservedRunningTime="2025-01-29 11:11:02.728450895 +0000 UTC m=+12.250453104" Jan 29 11:11:03.392542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395-rootfs.mount: Deactivated successfully. 
Jan 29 11:11:03.712697 containerd[1474]: time="2025-01-29T11:11:03.712453664Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:11:03.734569 containerd[1474]: time="2025-01-29T11:11:03.734490716Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\"" Jan 29 11:11:03.736528 containerd[1474]: time="2025-01-29T11:11:03.736496139Z" level=info msg="StartContainer for \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\"" Jan 29 11:11:03.773084 systemd[1]: Started cri-containerd-2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4.scope - libcontainer container 2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4. Jan 29 11:11:03.810777 containerd[1474]: time="2025-01-29T11:11:03.810373663Z" level=info msg="StartContainer for \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\" returns successfully" Jan 29 11:11:03.826267 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:11:03.827007 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:11:03.827081 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:11:03.833365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:11:03.833567 systemd[1]: cri-containerd-2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4.scope: Deactivated successfully. Jan 29 11:11:03.854076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:11:03.865682 containerd[1474]: time="2025-01-29T11:11:03.865619974Z" level=info msg="shim disconnected" id=2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4 namespace=k8s.io Jan 29 11:11:03.865682 containerd[1474]: time="2025-01-29T11:11:03.865676415Z" level=warning msg="cleaning up after shim disconnected" id=2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4 namespace=k8s.io Jan 29 11:11:03.865682 containerd[1474]: time="2025-01-29T11:11:03.865686335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:04.392195 systemd[1]: run-containerd-runc-k8s.io-2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4-runc.5E0e3q.mount: Deactivated successfully. Jan 29 11:11:04.392318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4-rootfs.mount: Deactivated successfully. Jan 29 11:11:04.559993 containerd[1474]: time="2025-01-29T11:11:04.559871339Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:04.561507 containerd[1474]: time="2025-01-29T11:11:04.561431237Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 11:11:04.562197 containerd[1474]: time="2025-01-29T11:11:04.562135125Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:11:04.564855 containerd[1474]: time="2025-01-29T11:11:04.563792863Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.18759241s" Jan 29 11:11:04.564855 containerd[1474]: time="2025-01-29T11:11:04.563831064Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 11:11:04.567795 containerd[1474]: time="2025-01-29T11:11:04.567568546Z" level=info msg="CreateContainer within sandbox \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:11:04.584749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153805569.mount: Deactivated successfully. Jan 29 11:11:04.591350 containerd[1474]: time="2025-01-29T11:11:04.591282733Z" level=info msg="CreateContainer within sandbox \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\"" Jan 29 11:11:04.592421 containerd[1474]: time="2025-01-29T11:11:04.592221864Z" level=info msg="StartContainer for \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\"" Jan 29 11:11:04.639284 systemd[1]: Started cri-containerd-6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251.scope - libcontainer container 6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251. 
Jan 29 11:11:04.671115 containerd[1474]: time="2025-01-29T11:11:04.670827189Z" level=info msg="StartContainer for \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\" returns successfully" Jan 29 11:11:04.719506 containerd[1474]: time="2025-01-29T11:11:04.719456538Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:11:04.741009 containerd[1474]: time="2025-01-29T11:11:04.740958180Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\"" Jan 29 11:11:04.742036 containerd[1474]: time="2025-01-29T11:11:04.741996672Z" level=info msg="StartContainer for \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\"" Jan 29 11:11:04.778433 systemd[1]: Started cri-containerd-4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec.scope - libcontainer container 4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec. Jan 29 11:11:04.827355 containerd[1474]: time="2025-01-29T11:11:04.827295073Z" level=info msg="StartContainer for \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\" returns successfully" Jan 29 11:11:04.832711 systemd[1]: cri-containerd-4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec.scope: Deactivated successfully. 
Jan 29 11:11:04.914035 containerd[1474]: time="2025-01-29T11:11:04.913730407Z" level=info msg="shim disconnected" id=4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec namespace=k8s.io Jan 29 11:11:04.914035 containerd[1474]: time="2025-01-29T11:11:04.914037690Z" level=warning msg="cleaning up after shim disconnected" id=4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec namespace=k8s.io Jan 29 11:11:04.914294 containerd[1474]: time="2025-01-29T11:11:04.914051810Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:05.727925 containerd[1474]: time="2025-01-29T11:11:05.727804910Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:11:05.750820 containerd[1474]: time="2025-01-29T11:11:05.750752125Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\"" Jan 29 11:11:05.752534 containerd[1474]: time="2025-01-29T11:11:05.751517574Z" level=info msg="StartContainer for \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\"" Jan 29 11:11:05.761781 kubelet[2751]: I0129 11:11:05.761272 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lxjg9" podStartSLOduration=1.98078448 podStartE2EDuration="8.761251442s" podCreationTimestamp="2025-01-29 11:10:57 +0000 UTC" firstStartedPulling="2025-01-29 11:10:57.785074601 +0000 UTC m=+7.307076810" lastFinishedPulling="2025-01-29 11:11:04.565541563 +0000 UTC m=+14.087543772" observedRunningTime="2025-01-29 11:11:04.765934141 +0000 UTC m=+14.287936390" watchObservedRunningTime="2025-01-29 11:11:05.761251442 +0000 UTC m=+15.283253651" Jan 29 11:11:05.788187 
systemd[1]: Started cri-containerd-76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4.scope - libcontainer container 76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4. Jan 29 11:11:05.820575 systemd[1]: cri-containerd-76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4.scope: Deactivated successfully. Jan 29 11:11:05.824776 containerd[1474]: time="2025-01-29T11:11:05.824724788Z" level=info msg="StartContainer for \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\" returns successfully" Jan 29 11:11:05.853939 containerd[1474]: time="2025-01-29T11:11:05.853724270Z" level=info msg="shim disconnected" id=76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4 namespace=k8s.io Jan 29 11:11:05.853939 containerd[1474]: time="2025-01-29T11:11:05.853796591Z" level=warning msg="cleaning up after shim disconnected" id=76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4 namespace=k8s.io Jan 29 11:11:05.853939 containerd[1474]: time="2025-01-29T11:11:05.853805751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:11:06.394663 systemd[1]: run-containerd-runc-k8s.io-76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4-runc.UCCTwh.mount: Deactivated successfully. Jan 29 11:11:06.394796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4-rootfs.mount: Deactivated successfully. Jan 29 11:11:06.736873 containerd[1474]: time="2025-01-29T11:11:06.736227372Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:11:06.761847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042675636.mount: Deactivated successfully. 
Jan 29 11:11:06.766047 containerd[1474]: time="2025-01-29T11:11:06.766000179Z" level=info msg="CreateContainer within sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\"" Jan 29 11:11:06.766696 containerd[1474]: time="2025-01-29T11:11:06.766547505Z" level=info msg="StartContainer for \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\"" Jan 29 11:11:06.798125 systemd[1]: Started cri-containerd-b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301.scope - libcontainer container b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301. Jan 29 11:11:06.835755 containerd[1474]: time="2025-01-29T11:11:06.835660543Z" level=info msg="StartContainer for \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\" returns successfully" Jan 29 11:11:06.993062 kubelet[2751]: I0129 11:11:06.992951 2751 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:11:07.049125 systemd[1]: Created slice kubepods-burstable-podf53165f6_06fd_4922_a608_d8be5ebba892.slice - libcontainer container kubepods-burstable-podf53165f6_06fd_4922_a608_d8be5ebba892.slice. Jan 29 11:11:07.060066 systemd[1]: Created slice kubepods-burstable-podd9d787bd_0a24_4f04_8751_5a6fc46a691a.slice - libcontainer container kubepods-burstable-podd9d787bd_0a24_4f04_8751_5a6fc46a691a.slice. 
Jan 29 11:11:07.114007 kubelet[2751]: I0129 11:11:07.113946 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f53165f6-06fd-4922-a608-d8be5ebba892-config-volume\") pod \"coredns-6f6b679f8f-gxm42\" (UID: \"f53165f6-06fd-4922-a608-d8be5ebba892\") " pod="kube-system/coredns-6f6b679f8f-gxm42" Jan 29 11:11:07.114149 kubelet[2751]: I0129 11:11:07.114017 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjfnb\" (UniqueName: \"kubernetes.io/projected/d9d787bd-0a24-4f04-8751-5a6fc46a691a-kube-api-access-wjfnb\") pod \"coredns-6f6b679f8f-xhgqg\" (UID: \"d9d787bd-0a24-4f04-8751-5a6fc46a691a\") " pod="kube-system/coredns-6f6b679f8f-xhgqg" Jan 29 11:11:07.114149 kubelet[2751]: I0129 11:11:07.114050 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9d787bd-0a24-4f04-8751-5a6fc46a691a-config-volume\") pod \"coredns-6f6b679f8f-xhgqg\" (UID: \"d9d787bd-0a24-4f04-8751-5a6fc46a691a\") " pod="kube-system/coredns-6f6b679f8f-xhgqg" Jan 29 11:11:07.114149 kubelet[2751]: I0129 11:11:07.114078 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6prmm\" (UniqueName: \"kubernetes.io/projected/f53165f6-06fd-4922-a608-d8be5ebba892-kube-api-access-6prmm\") pod \"coredns-6f6b679f8f-gxm42\" (UID: \"f53165f6-06fd-4922-a608-d8be5ebba892\") " pod="kube-system/coredns-6f6b679f8f-gxm42" Jan 29 11:11:07.356119 containerd[1474]: time="2025-01-29T11:11:07.355322673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gxm42,Uid:f53165f6-06fd-4922-a608-d8be5ebba892,Namespace:kube-system,Attempt:0,}" Jan 29 11:11:07.364763 containerd[1474]: time="2025-01-29T11:11:07.364688614Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-xhgqg,Uid:d9d787bd-0a24-4f04-8751-5a6fc46a691a,Namespace:kube-system,Attempt:0,}" Jan 29 11:11:09.067143 systemd-networkd[1370]: cilium_host: Link UP Jan 29 11:11:09.068433 systemd-networkd[1370]: cilium_net: Link UP Jan 29 11:11:09.069842 systemd-networkd[1370]: cilium_net: Gained carrier Jan 29 11:11:09.070065 systemd-networkd[1370]: cilium_host: Gained carrier Jan 29 11:11:09.183310 systemd-networkd[1370]: cilium_vxlan: Link UP Jan 29 11:11:09.183319 systemd-networkd[1370]: cilium_vxlan: Gained carrier Jan 29 11:11:09.484230 kernel: NET: Registered PF_ALG protocol family Jan 29 11:11:09.820219 systemd-networkd[1370]: cilium_net: Gained IPv6LL Jan 29 11:11:09.947665 systemd-networkd[1370]: cilium_host: Gained IPv6LL Jan 29 11:11:10.208025 systemd-networkd[1370]: lxc_health: Link UP Jan 29 11:11:10.208446 systemd-networkd[1370]: lxc_health: Gained carrier Jan 29 11:11:10.331226 systemd-networkd[1370]: cilium_vxlan: Gained IPv6LL Jan 29 11:11:10.429731 systemd-networkd[1370]: lxc5fd38b00810e: Link UP Jan 29 11:11:10.435358 kernel: eth0: renamed from tmpd0d0f Jan 29 11:11:10.441676 systemd-networkd[1370]: lxc5fd38b00810e: Gained carrier Jan 29 11:11:10.449737 systemd-networkd[1370]: lxcd3352e93b42f: Link UP Jan 29 11:11:10.454793 kernel: eth0: renamed from tmp205d3 Jan 29 11:11:10.460138 systemd-networkd[1370]: lxcd3352e93b42f: Gained carrier Jan 29 11:11:11.228091 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jan 29 11:11:11.315150 kubelet[2751]: I0129 11:11:11.314193 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kvf4q" podStartSLOduration=10.396451142 podStartE2EDuration="15.314167392s" podCreationTimestamp="2025-01-29 11:10:56 +0000 UTC" firstStartedPulling="2025-01-29 11:10:57.458023358 +0000 UTC m=+6.980025567" lastFinishedPulling="2025-01-29 11:11:02.375739608 +0000 UTC m=+11.897741817" observedRunningTime="2025-01-29 11:11:07.762824604 +0000 UTC m=+17.284826853" 
watchObservedRunningTime="2025-01-29 11:11:11.314167392 +0000 UTC m=+20.836169641" Jan 29 11:11:11.739746 systemd-networkd[1370]: lxcd3352e93b42f: Gained IPv6LL Jan 29 11:11:11.803027 systemd-networkd[1370]: lxc5fd38b00810e: Gained IPv6LL Jan 29 11:11:14.548217 containerd[1474]: time="2025-01-29T11:11:14.547059645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:11:14.548217 containerd[1474]: time="2025-01-29T11:11:14.547158206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:11:14.548217 containerd[1474]: time="2025-01-29T11:11:14.547179446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:14.548217 containerd[1474]: time="2025-01-29T11:11:14.547482969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:14.569499 containerd[1474]: time="2025-01-29T11:11:14.568784181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:11:14.569499 containerd[1474]: time="2025-01-29T11:11:14.568849582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:11:14.569499 containerd[1474]: time="2025-01-29T11:11:14.568865422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:14.571544 containerd[1474]: time="2025-01-29T11:11:14.571029763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:11:14.577104 systemd[1]: Started cri-containerd-205d313d51746a0fee593bd2717832ec36eb3978fd05ac78e61063396da76f26.scope - libcontainer container 205d313d51746a0fee593bd2717832ec36eb3978fd05ac78e61063396da76f26. Jan 29 11:11:14.610124 systemd[1]: Started cri-containerd-d0d0fd8dc42245d0c72a32ddc9871ea8ab71d73d8016d12d230795485fffd747.scope - libcontainer container d0d0fd8dc42245d0c72a32ddc9871ea8ab71d73d8016d12d230795485fffd747. Jan 29 11:11:14.647433 containerd[1474]: time="2025-01-29T11:11:14.647341322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gxm42,Uid:f53165f6-06fd-4922-a608-d8be5ebba892,Namespace:kube-system,Attempt:0,} returns sandbox id \"205d313d51746a0fee593bd2717832ec36eb3978fd05ac78e61063396da76f26\"" Jan 29 11:11:14.653321 containerd[1474]: time="2025-01-29T11:11:14.653211581Z" level=info msg="CreateContainer within sandbox \"205d313d51746a0fee593bd2717832ec36eb3978fd05ac78e61063396da76f26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:11:14.680449 containerd[1474]: time="2025-01-29T11:11:14.680411131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xhgqg,Uid:d9d787bd-0a24-4f04-8751-5a6fc46a691a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0d0fd8dc42245d0c72a32ddc9871ea8ab71d73d8016d12d230795485fffd747\"" Jan 29 11:11:14.683912 containerd[1474]: time="2025-01-29T11:11:14.683802125Z" level=info msg="CreateContainer within sandbox \"205d313d51746a0fee593bd2717832ec36eb3978fd05ac78e61063396da76f26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98ec609fdac90456a9a9c6336658db1ae0a449faf151c5fff098b902a1074cba\"" Jan 29 11:11:14.685269 containerd[1474]: time="2025-01-29T11:11:14.685147458Z" level=info msg="StartContainer for \"98ec609fdac90456a9a9c6336658db1ae0a449faf151c5fff098b902a1074cba\"" Jan 29 11:11:14.689842 containerd[1474]: 
time="2025-01-29T11:11:14.689806824Z" level=info msg="CreateContainer within sandbox \"d0d0fd8dc42245d0c72a32ddc9871ea8ab71d73d8016d12d230795485fffd747\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:11:14.706760 containerd[1474]: time="2025-01-29T11:11:14.706710472Z" level=info msg="CreateContainer within sandbox \"d0d0fd8dc42245d0c72a32ddc9871ea8ab71d73d8016d12d230795485fffd747\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"076f422083dff4591e4251d6f61c7ac6950691a971d8e8caa10fb8714816b91e\"" Jan 29 11:11:14.708182 containerd[1474]: time="2025-01-29T11:11:14.708151647Z" level=info msg="StartContainer for \"076f422083dff4591e4251d6f61c7ac6950691a971d8e8caa10fb8714816b91e\"" Jan 29 11:11:14.747715 systemd[1]: Started cri-containerd-98ec609fdac90456a9a9c6336658db1ae0a449faf151c5fff098b902a1074cba.scope - libcontainer container 98ec609fdac90456a9a9c6336658db1ae0a449faf151c5fff098b902a1074cba. Jan 29 11:11:14.763183 systemd[1]: Started cri-containerd-076f422083dff4591e4251d6f61c7ac6950691a971d8e8caa10fb8714816b91e.scope - libcontainer container 076f422083dff4591e4251d6f61c7ac6950691a971d8e8caa10fb8714816b91e. 
Jan 29 11:11:14.799839 containerd[1474]: time="2025-01-29T11:11:14.797512855Z" level=info msg="StartContainer for \"98ec609fdac90456a9a9c6336658db1ae0a449faf151c5fff098b902a1074cba\" returns successfully" Jan 29 11:11:14.817690 containerd[1474]: time="2025-01-29T11:11:14.817641335Z" level=info msg="StartContainer for \"076f422083dff4591e4251d6f61c7ac6950691a971d8e8caa10fb8714816b91e\" returns successfully" Jan 29 11:11:15.805269 kubelet[2751]: I0129 11:11:15.805192 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gxm42" podStartSLOduration=18.805175345 podStartE2EDuration="18.805175345s" podCreationTimestamp="2025-01-29 11:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:11:15.804521218 +0000 UTC m=+25.326523427" watchObservedRunningTime="2025-01-29 11:11:15.805175345 +0000 UTC m=+25.327177554" Jan 29 11:11:15.805767 kubelet[2751]: I0129 11:11:15.805321 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xhgqg" podStartSLOduration=18.805314426 podStartE2EDuration="18.805314426s" podCreationTimestamp="2025-01-29 11:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:11:15.792141936 +0000 UTC m=+25.314144105" watchObservedRunningTime="2025-01-29 11:11:15.805314426 +0000 UTC m=+25.327316635" Jan 29 11:15:31.992051 update_engine[1456]: I20250129 11:15:31.991940 1456 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 11:15:31.992051 update_engine[1456]: I20250129 11:15:31.992004 1456 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992287 1456 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992694 1456 omaha_request_params.cc:62] Current group set to stable Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992794 1456 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992805 1456 update_attempter.cc:643] Scheduling an action processor start. Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992821 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992848 1456 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992929 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992942 1456 omaha_request_action.cc:272] Request: Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.992949 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:15:31.995279 update_engine[1456]: I20250129 11:15:31.995024 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:15:31.995761 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 11:15:31.996029 update_engine[1456]: I20250129 11:15:31.995476 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 11:15:31.998738 update_engine[1456]: E20250129 11:15:31.998674 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:15:31.998852 update_engine[1456]: I20250129 11:15:31.998766 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 11:15:39.991761 systemd[1]: Started sshd@7-138.199.151.137:22-147.75.109.163:52932.service - OpenSSH per-connection server daemon (147.75.109.163:52932). Jan 29 11:15:40.994638 sshd[4171]: Accepted publickey for core from 147.75.109.163 port 52932 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:15:40.995852 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:41.000533 systemd-logind[1452]: New session 8 of user core. Jan 29 11:15:41.009247 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:15:41.764171 sshd[4173]: Connection closed by 147.75.109.163 port 52932 Jan 29 11:15:41.765266 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:41.769917 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:15:41.772333 systemd[1]: sshd@7-138.199.151.137:22-147.75.109.163:52932.service: Deactivated successfully. Jan 29 11:15:41.775214 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:15:41.776656 systemd-logind[1452]: Removed session 8. Jan 29 11:15:41.892372 update_engine[1456]: I20250129 11:15:41.892246 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:15:41.893011 update_engine[1456]: I20250129 11:15:41.892591 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:15:41.893011 update_engine[1456]: I20250129 11:15:41.892925 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 11:15:41.893431 update_engine[1456]: E20250129 11:15:41.893352 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:15:41.893513 update_engine[1456]: I20250129 11:15:41.893448 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 11:15:46.944649 systemd[1]: Started sshd@8-138.199.151.137:22-147.75.109.163:52940.service - OpenSSH per-connection server daemon (147.75.109.163:52940). Jan 29 11:15:47.938366 sshd[4185]: Accepted publickey for core from 147.75.109.163 port 52940 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:15:47.940642 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:47.946079 systemd-logind[1452]: New session 9 of user core. Jan 29 11:15:47.952144 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:15:48.703751 sshd[4187]: Connection closed by 147.75.109.163 port 52940 Jan 29 11:15:48.704623 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:48.709268 systemd[1]: sshd@8-138.199.151.137:22-147.75.109.163:52940.service: Deactivated successfully. Jan 29 11:15:48.711115 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:15:48.712266 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:15:48.713455 systemd-logind[1452]: Removed session 9. Jan 29 11:15:51.895038 update_engine[1456]: I20250129 11:15:51.894321 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:15:51.895038 update_engine[1456]: I20250129 11:15:51.894805 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:15:51.895626 update_engine[1456]: I20250129 11:15:51.895265 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 11:15:51.895871 update_engine[1456]: E20250129 11:15:51.895782 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:15:51.934050 update_engine[1456]: I20250129 11:15:51.895931 1456 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 11:15:53.879379 systemd[1]: Started sshd@9-138.199.151.137:22-147.75.109.163:50146.service - OpenSSH per-connection server daemon (147.75.109.163:50146). Jan 29 11:15:54.881014 sshd[4201]: Accepted publickey for core from 147.75.109.163 port 50146 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:15:54.883112 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:54.888234 systemd-logind[1452]: New session 10 of user core. Jan 29 11:15:54.897262 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:15:55.649800 sshd[4203]: Connection closed by 147.75.109.163 port 50146 Jan 29 11:15:55.650819 sshd-session[4201]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:55.656830 systemd[1]: sshd@9-138.199.151.137:22-147.75.109.163:50146.service: Deactivated successfully. Jan 29 11:15:55.659455 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:15:55.662264 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:15:55.663743 systemd-logind[1452]: Removed session 10. Jan 29 11:15:55.837353 systemd[1]: Started sshd@10-138.199.151.137:22-147.75.109.163:50150.service - OpenSSH per-connection server daemon (147.75.109.163:50150). Jan 29 11:15:56.823610 sshd[4215]: Accepted publickey for core from 147.75.109.163 port 50150 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:15:56.826233 sshd-session[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:56.832606 systemd-logind[1452]: New session 11 of user core. 
Jan 29 11:15:56.842262 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:15:57.626646 sshd[4217]: Connection closed by 147.75.109.163 port 50150 Jan 29 11:15:57.627717 sshd-session[4215]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:57.633356 systemd[1]: sshd@10-138.199.151.137:22-147.75.109.163:50150.service: Deactivated successfully. Jan 29 11:15:57.636457 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:15:57.637829 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:15:57.639375 systemd-logind[1452]: Removed session 11. Jan 29 11:15:57.804364 systemd[1]: Started sshd@11-138.199.151.137:22-147.75.109.163:41024.service - OpenSSH per-connection server daemon (147.75.109.163:41024). Jan 29 11:15:58.801114 sshd[4228]: Accepted publickey for core from 147.75.109.163 port 41024 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:15:58.803648 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:58.810398 systemd-logind[1452]: New session 12 of user core. Jan 29 11:15:58.814090 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:15:59.578469 sshd[4230]: Connection closed by 147.75.109.163 port 41024 Jan 29 11:15:59.579510 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:59.585324 systemd[1]: sshd@11-138.199.151.137:22-147.75.109.163:41024.service: Deactivated successfully. Jan 29 11:15:59.588111 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:15:59.589176 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:15:59.590642 systemd-logind[1452]: Removed session 12. 
Jan 29 11:16:01.896098 update_engine[1456]: I20250129 11:16:01.895309 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:16:01.896098 update_engine[1456]: I20250129 11:16:01.895988 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:16:01.896852 update_engine[1456]: I20250129 11:16:01.896456 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 11:16:01.897034 update_engine[1456]: E20250129 11:16:01.896960 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:16:01.897147 update_engine[1456]: I20250129 11:16:01.897053 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 11:16:01.897147 update_engine[1456]: I20250129 11:16:01.897069 1456 omaha_request_action.cc:617] Omaha request response: Jan 29 11:16:01.897243 update_engine[1456]: E20250129 11:16:01.897212 1456 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 11:16:01.897289 update_engine[1456]: I20250129 11:16:01.897243 1456 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 29 11:16:01.897289 update_engine[1456]: I20250129 11:16:01.897255 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 11:16:01.897289 update_engine[1456]: I20250129 11:16:01.897265 1456 update_attempter.cc:306] Processing Done. Jan 29 11:16:01.897424 update_engine[1456]: E20250129 11:16:01.897291 1456 update_attempter.cc:619] Update failed. 
Jan 29 11:16:01.897424 update_engine[1456]: I20250129 11:16:01.897305 1456 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 11:16:01.897424 update_engine[1456]: I20250129 11:16:01.897316 1456 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 11:16:01.897424 update_engine[1456]: I20250129 11:16:01.897326 1456 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 29 11:16:01.897587 update_engine[1456]: I20250129 11:16:01.897436 1456 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 11:16:01.897587 update_engine[1456]: I20250129 11:16:01.897473 1456 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 11:16:01.897587 update_engine[1456]: I20250129 11:16:01.897485 1456 omaha_request_action.cc:272] Request: Jan 29 11:16:01.897587 update_engine[1456]: I20250129 11:16:01.897497 1456 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 11:16:01.898000 update_engine[1456]: I20250129 11:16:01.897742 1456 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 11:16:01.898315 update_engine[1456]: I20250129 11:16:01.898069 1456 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 11:16:01.898630 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 11:16:01.899179 update_engine[1456]: E20250129 11:16:01.898731 1456 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 11:16:01.899179 update_engine[1456]: I20250129 11:16:01.898804 1456 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 11:16:01.899179 update_engine[1456]: I20250129 11:16:01.898818 1456 omaha_request_action.cc:617] Omaha request response: Jan 29 11:16:01.899179 update_engine[1456]: I20250129 11:16:01.898831 1456 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 11:16:01.899179 update_engine[1456]: I20250129 11:16:01.898842 1456 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 11:16:01.899179 update_engine[1456]: I20250129 11:16:01.898852 1456 update_attempter.cc:306] Processing Done. Jan 29 11:16:01.899179 update_engine[1456]: I20250129 11:16:01.898864 1456 update_attempter.cc:310] Error event sent. Jan 29 11:16:01.899179 update_engine[1456]: I20250129 11:16:01.898912 1456 update_check_scheduler.cc:74] Next update check in 49m18s Jan 29 11:16:01.899521 locksmithd[1487]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 11:16:04.761141 systemd[1]: Started sshd@12-138.199.151.137:22-147.75.109.163:41040.service - OpenSSH per-connection server daemon (147.75.109.163:41040). Jan 29 11:16:05.766952 sshd[4241]: Accepted publickey for core from 147.75.109.163 port 41040 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:05.768483 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:05.774777 systemd-logind[1452]: New session 13 of user core. 
Jan 29 11:16:05.782557 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:16:06.527449 sshd[4243]: Connection closed by 147.75.109.163 port 41040 Jan 29 11:16:06.528497 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:06.534128 systemd[1]: sshd@12-138.199.151.137:22-147.75.109.163:41040.service: Deactivated successfully. Jan 29 11:16:06.537348 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:16:06.538799 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:16:06.539802 systemd-logind[1452]: Removed session 13. Jan 29 11:16:11.713388 systemd[1]: Started sshd@13-138.199.151.137:22-147.75.109.163:60100.service - OpenSSH per-connection server daemon (147.75.109.163:60100). Jan 29 11:16:12.706939 sshd[4254]: Accepted publickey for core from 147.75.109.163 port 60100 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:12.709858 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:12.717969 systemd-logind[1452]: New session 14 of user core. Jan 29 11:16:12.726148 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:16:13.457646 sshd[4256]: Connection closed by 147.75.109.163 port 60100 Jan 29 11:16:13.458704 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:13.464333 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:16:13.465200 systemd[1]: sshd@13-138.199.151.137:22-147.75.109.163:60100.service: Deactivated successfully. Jan 29 11:16:13.467973 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:16:13.470773 systemd-logind[1452]: Removed session 14. Jan 29 11:16:13.642486 systemd[1]: Started sshd@14-138.199.151.137:22-147.75.109.163:60114.service - OpenSSH per-connection server daemon (147.75.109.163:60114). 
Jan 29 11:16:14.640050 sshd[4267]: Accepted publickey for core from 147.75.109.163 port 60114 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:14.642165 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:14.649699 systemd-logind[1452]: New session 15 of user core. Jan 29 11:16:14.654129 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:16:15.444879 sshd[4269]: Connection closed by 147.75.109.163 port 60114 Jan 29 11:16:15.445973 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:15.451391 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:16:15.451634 systemd[1]: sshd@14-138.199.151.137:22-147.75.109.163:60114.service: Deactivated successfully. Jan 29 11:16:15.455139 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:16:15.457544 systemd-logind[1452]: Removed session 15. Jan 29 11:16:15.619381 systemd[1]: Started sshd@15-138.199.151.137:22-147.75.109.163:60118.service - OpenSSH per-connection server daemon (147.75.109.163:60118). Jan 29 11:16:16.605785 sshd[4279]: Accepted publickey for core from 147.75.109.163 port 60118 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:16.607256 sshd-session[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:16.616880 systemd-logind[1452]: New session 16 of user core. Jan 29 11:16:16.619227 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:16:18.925422 sshd[4281]: Connection closed by 147.75.109.163 port 60118 Jan 29 11:16:18.926515 sshd-session[4279]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:18.930329 systemd[1]: sshd@15-138.199.151.137:22-147.75.109.163:60118.service: Deactivated successfully. Jan 29 11:16:18.933701 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 29 11:16:18.936394 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:16:18.937926 systemd-logind[1452]: Removed session 16. Jan 29 11:16:19.105448 systemd[1]: Started sshd@16-138.199.151.137:22-147.75.109.163:37190.service - OpenSSH per-connection server daemon (147.75.109.163:37190). Jan 29 11:16:20.082687 sshd[4299]: Accepted publickey for core from 147.75.109.163 port 37190 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:20.084992 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:20.091443 systemd-logind[1452]: New session 17 of user core. Jan 29 11:16:20.098260 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:16:20.970645 sshd[4301]: Connection closed by 147.75.109.163 port 37190 Jan 29 11:16:20.970434 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:20.976776 systemd[1]: sshd@16-138.199.151.137:22-147.75.109.163:37190.service: Deactivated successfully. Jan 29 11:16:20.980372 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:16:20.981870 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:16:20.982956 systemd-logind[1452]: Removed session 17. Jan 29 11:16:21.162761 systemd[1]: Started sshd@17-138.199.151.137:22-147.75.109.163:37200.service - OpenSSH per-connection server daemon (147.75.109.163:37200). Jan 29 11:16:22.166478 sshd[4310]: Accepted publickey for core from 147.75.109.163 port 37200 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:22.168791 sshd-session[4310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:22.174243 systemd-logind[1452]: New session 18 of user core. Jan 29 11:16:22.181313 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 11:16:22.932371 sshd[4312]: Connection closed by 147.75.109.163 port 37200 Jan 29 11:16:22.931942 sshd-session[4310]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:22.937975 systemd[1]: sshd@17-138.199.151.137:22-147.75.109.163:37200.service: Deactivated successfully. Jan 29 11:16:22.941625 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:16:22.944528 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:16:22.946352 systemd-logind[1452]: Removed session 18. Jan 29 11:16:28.105262 systemd[1]: Started sshd@18-138.199.151.137:22-147.75.109.163:55442.service - OpenSSH per-connection server daemon (147.75.109.163:55442). Jan 29 11:16:29.093219 sshd[4328]: Accepted publickey for core from 147.75.109.163 port 55442 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:29.094790 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:29.101366 systemd-logind[1452]: New session 19 of user core. Jan 29 11:16:29.109324 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:16:29.851704 sshd[4330]: Connection closed by 147.75.109.163 port 55442 Jan 29 11:16:29.851380 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:29.859618 systemd[1]: sshd@18-138.199.151.137:22-147.75.109.163:55442.service: Deactivated successfully. Jan 29 11:16:29.862740 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:16:29.864062 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:16:29.866536 systemd-logind[1452]: Removed session 19. Jan 29 11:16:35.029330 systemd[1]: Started sshd@19-138.199.151.137:22-147.75.109.163:55456.service - OpenSSH per-connection server daemon (147.75.109.163:55456). 
Jan 29 11:16:36.033947 sshd[4340]: Accepted publickey for core from 147.75.109.163 port 55456 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:36.035601 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:36.040503 systemd-logind[1452]: New session 20 of user core. Jan 29 11:16:36.049316 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:16:36.796960 sshd[4342]: Connection closed by 147.75.109.163 port 55456 Jan 29 11:16:36.795264 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:36.800501 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:16:36.801352 systemd[1]: sshd@19-138.199.151.137:22-147.75.109.163:55456.service: Deactivated successfully. Jan 29 11:16:36.803813 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:16:36.806504 systemd-logind[1452]: Removed session 20. Jan 29 11:16:36.973352 systemd[1]: Started sshd@20-138.199.151.137:22-147.75.109.163:55470.service - OpenSSH per-connection server daemon (147.75.109.163:55470). Jan 29 11:16:37.959527 sshd[4353]: Accepted publickey for core from 147.75.109.163 port 55470 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:37.962498 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:37.970620 systemd-logind[1452]: New session 21 of user core. Jan 29 11:16:37.975825 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 29 11:16:40.534827 containerd[1474]: time="2025-01-29T11:16:40.534611597Z" level=info msg="StopContainer for \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\" with timeout 30 (s)" Jan 29 11:16:40.539486 containerd[1474]: time="2025-01-29T11:16:40.539310625Z" level=info msg="Stop container \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\" with signal terminated" Jan 29 11:16:40.551706 containerd[1474]: time="2025-01-29T11:16:40.551640675Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:16:40.552451 systemd[1]: cri-containerd-6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251.scope: Deactivated successfully. Jan 29 11:16:40.570185 containerd[1474]: time="2025-01-29T11:16:40.570147909Z" level=info msg="StopContainer for \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\" with timeout 2 (s)" Jan 29 11:16:40.571020 containerd[1474]: time="2025-01-29T11:16:40.570993267Z" level=info msg="Stop container \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\" with signal terminated" Jan 29 11:16:40.586871 systemd-networkd[1370]: lxc_health: Link DOWN Jan 29 11:16:40.586880 systemd-networkd[1370]: lxc_health: Lost carrier Jan 29 11:16:40.589741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251-rootfs.mount: Deactivated successfully. Jan 29 11:16:40.610980 systemd[1]: cri-containerd-b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301.scope: Deactivated successfully. Jan 29 11:16:40.613274 systemd[1]: cri-containerd-b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301.scope: Consumed 8.083s CPU time. 
Jan 29 11:16:40.614248 containerd[1474]: time="2025-01-29T11:16:40.613241843Z" level=info msg="shim disconnected" id=6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251 namespace=k8s.io Jan 29 11:16:40.614248 containerd[1474]: time="2025-01-29T11:16:40.613422323Z" level=warning msg="cleaning up after shim disconnected" id=6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251 namespace=k8s.io Jan 29 11:16:40.614248 containerd[1474]: time="2025-01-29T11:16:40.614107281Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:40.640463 containerd[1474]: time="2025-01-29T11:16:40.640412816Z" level=info msg="StopContainer for \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\" returns successfully" Jan 29 11:16:40.641972 containerd[1474]: time="2025-01-29T11:16:40.641701053Z" level=info msg="StopPodSandbox for \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\"" Jan 29 11:16:40.641972 containerd[1474]: time="2025-01-29T11:16:40.641750453Z" level=info msg="Container to stop \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:16:40.646191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8-shm.mount: Deactivated successfully. Jan 29 11:16:40.652801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301-rootfs.mount: Deactivated successfully. Jan 29 11:16:40.656930 systemd[1]: cri-containerd-ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8.scope: Deactivated successfully. 
Jan 29 11:16:40.670251 containerd[1474]: time="2025-01-29T11:16:40.670123463Z" level=info msg="shim disconnected" id=b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301 namespace=k8s.io Jan 29 11:16:40.670251 containerd[1474]: time="2025-01-29T11:16:40.670214863Z" level=warning msg="cleaning up after shim disconnected" id=b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301 namespace=k8s.io Jan 29 11:16:40.670251 containerd[1474]: time="2025-01-29T11:16:40.670226423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:40.698865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8-rootfs.mount: Deactivated successfully. Jan 29 11:16:40.701066 containerd[1474]: time="2025-01-29T11:16:40.700800947Z" level=info msg="StopContainer for \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\" returns successfully" Jan 29 11:16:40.710175 containerd[1474]: time="2025-01-29T11:16:40.701574746Z" level=info msg="StopPodSandbox for \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\"" Jan 29 11:16:40.710175 containerd[1474]: time="2025-01-29T11:16:40.701611705Z" level=info msg="Container to stop \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:16:40.710175 containerd[1474]: time="2025-01-29T11:16:40.701624465Z" level=info msg="Container to stop \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:16:40.710175 containerd[1474]: time="2025-01-29T11:16:40.701632785Z" level=info msg="Container to stop \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:16:40.710175 containerd[1474]: time="2025-01-29T11:16:40.701643065Z" level=info 
msg="Container to stop \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:16:40.710175 containerd[1474]: time="2025-01-29T11:16:40.701652345Z" level=info msg="Container to stop \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:16:40.707777 systemd[1]: cri-containerd-3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad.scope: Deactivated successfully. Jan 29 11:16:40.720293 containerd[1474]: time="2025-01-29T11:16:40.719894740Z" level=info msg="shim disconnected" id=ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8 namespace=k8s.io Jan 29 11:16:40.720293 containerd[1474]: time="2025-01-29T11:16:40.720016500Z" level=warning msg="cleaning up after shim disconnected" id=ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8 namespace=k8s.io Jan 29 11:16:40.720293 containerd[1474]: time="2025-01-29T11:16:40.720026300Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:40.742648 containerd[1474]: time="2025-01-29T11:16:40.742410205Z" level=info msg="shim disconnected" id=3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad namespace=k8s.io Jan 29 11:16:40.742648 containerd[1474]: time="2025-01-29T11:16:40.742474245Z" level=warning msg="cleaning up after shim disconnected" id=3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad namespace=k8s.io Jan 29 11:16:40.742648 containerd[1474]: time="2025-01-29T11:16:40.742482845Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:40.750614 containerd[1474]: time="2025-01-29T11:16:40.750554785Z" level=info msg="TearDown network for sandbox \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\" successfully" Jan 29 11:16:40.750614 containerd[1474]: time="2025-01-29T11:16:40.750596825Z" level=info msg="StopPodSandbox for 
\"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\" returns successfully" Jan 29 11:16:40.769044 containerd[1474]: time="2025-01-29T11:16:40.768567541Z" level=info msg="TearDown network for sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" successfully" Jan 29 11:16:40.769044 containerd[1474]: time="2025-01-29T11:16:40.768609940Z" level=info msg="StopPodSandbox for \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" returns successfully" Jan 29 11:16:40.797679 kubelet[2751]: E0129 11:16:40.796404 2751 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:16:40.858874 kubelet[2751]: I0129 11:16:40.858254 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f224cc2-8b4b-4085-9f27-180208d3eb0e-clustermesh-secrets\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.858874 kubelet[2751]: I0129 11:16:40.858322 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cni-path\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.858874 kubelet[2751]: I0129 11:16:40.858359 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-config-path\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.858874 kubelet[2751]: I0129 11:16:40.858434 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/6f224cc2-8b4b-4085-9f27-180208d3eb0e-hubble-tls\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.858874 kubelet[2751]: I0129 11:16:40.858464 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-cgroup\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.858874 kubelet[2751]: I0129 11:16:40.858490 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-host-proc-sys-kernel\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859157 kubelet[2751]: I0129 11:16:40.858516 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-bpf-maps\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859157 kubelet[2751]: I0129 11:16:40.858547 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57b68f2e-7e28-4105-b1b9-fe9478579790-cilium-config-path\") pod \"57b68f2e-7e28-4105-b1b9-fe9478579790\" (UID: \"57b68f2e-7e28-4105-b1b9-fe9478579790\") " Jan 29 11:16:40.859157 kubelet[2751]: I0129 11:16:40.858573 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-etc-cni-netd\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859157 kubelet[2751]: I0129 11:16:40.858601 2751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-run\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859157 kubelet[2751]: I0129 11:16:40.858627 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-lib-modules\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859157 kubelet[2751]: I0129 11:16:40.858657 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df2rp\" (UniqueName: \"kubernetes.io/projected/6f224cc2-8b4b-4085-9f27-180208d3eb0e-kube-api-access-df2rp\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859290 kubelet[2751]: I0129 11:16:40.858685 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-hostproc\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859290 kubelet[2751]: I0129 11:16:40.858709 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-xtables-lock\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859290 kubelet[2751]: I0129 11:16:40.858734 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-host-proc-sys-net\") pod \"6f224cc2-8b4b-4085-9f27-180208d3eb0e\" (UID: 
\"6f224cc2-8b4b-4085-9f27-180208d3eb0e\") " Jan 29 11:16:40.859290 kubelet[2751]: I0129 11:16:40.858766 2751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tndwl\" (UniqueName: \"kubernetes.io/projected/57b68f2e-7e28-4105-b1b9-fe9478579790-kube-api-access-tndwl\") pod \"57b68f2e-7e28-4105-b1b9-fe9478579790\" (UID: \"57b68f2e-7e28-4105-b1b9-fe9478579790\") " Jan 29 11:16:40.862561 kubelet[2751]: I0129 11:16:40.861920 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.862561 kubelet[2751]: I0129 11:16:40.862101 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.862561 kubelet[2751]: I0129 11:16:40.862122 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.862561 kubelet[2751]: I0129 11:16:40.862938 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.863405 kubelet[2751]: I0129 11:16:40.863300 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.863405 kubelet[2751]: I0129 11:16:40.863348 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.863405 kubelet[2751]: I0129 11:16:40.863367 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.864975 kubelet[2751]: I0129 11:16:40.864478 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.864975 kubelet[2751]: I0129 11:16:40.864526 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.864975 kubelet[2751]: I0129 11:16:40.864545 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:16:40.864975 kubelet[2751]: I0129 11:16:40.864742 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57b68f2e-7e28-4105-b1b9-fe9478579790-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "57b68f2e-7e28-4105-b1b9-fe9478579790" (UID: "57b68f2e-7e28-4105-b1b9-fe9478579790"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:40.868748 kubelet[2751]: I0129 11:16:40.868493 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f224cc2-8b4b-4085-9f27-180208d3eb0e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:16:40.869108 kubelet[2751]: I0129 11:16:40.869076 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b68f2e-7e28-4105-b1b9-fe9478579790-kube-api-access-tndwl" (OuterVolumeSpecName: "kube-api-access-tndwl") pod "57b68f2e-7e28-4105-b1b9-fe9478579790" (UID: "57b68f2e-7e28-4105-b1b9-fe9478579790"). InnerVolumeSpecName "kube-api-access-tndwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:40.869255 kubelet[2751]: I0129 11:16:40.869229 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f224cc2-8b4b-4085-9f27-180208d3eb0e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:40.869708 kubelet[2751]: I0129 11:16:40.869657 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:16:40.870295 kubelet[2751]: I0129 11:16:40.870245 2751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f224cc2-8b4b-4085-9f27-180208d3eb0e-kube-api-access-df2rp" (OuterVolumeSpecName: "kube-api-access-df2rp") pod "6f224cc2-8b4b-4085-9f27-180208d3eb0e" (UID: "6f224cc2-8b4b-4085-9f27-180208d3eb0e"). InnerVolumeSpecName "kube-api-access-df2rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:16:40.960032 kubelet[2751]: I0129 11:16:40.959962 2751 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-xtables-lock\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960032 kubelet[2751]: I0129 11:16:40.960003 2751 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-host-proc-sys-net\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960032 kubelet[2751]: I0129 11:16:40.960015 2751 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tndwl\" (UniqueName: \"kubernetes.io/projected/57b68f2e-7e28-4105-b1b9-fe9478579790-kube-api-access-tndwl\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960032 kubelet[2751]: I0129 11:16:40.960028 2751 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-hostproc\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960032 kubelet[2751]: I0129 11:16:40.960037 2751 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f224cc2-8b4b-4085-9f27-180208d3eb0e-clustermesh-secrets\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960032 kubelet[2751]: 
I0129 11:16:40.960050 2751 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cni-path\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960032 kubelet[2751]: I0129 11:16:40.960060 2751 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-config-path\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960032 kubelet[2751]: I0129 11:16:40.960068 2751 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f224cc2-8b4b-4085-9f27-180208d3eb0e-hubble-tls\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960481 kubelet[2751]: I0129 11:16:40.960076 2751 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-cgroup\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960481 kubelet[2751]: I0129 11:16:40.960085 2751 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-host-proc-sys-kernel\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960481 kubelet[2751]: I0129 11:16:40.960092 2751 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-bpf-maps\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960481 kubelet[2751]: I0129 11:16:40.960101 2751 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57b68f2e-7e28-4105-b1b9-fe9478579790-cilium-config-path\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960481 kubelet[2751]: 
I0129 11:16:40.960109 2751 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-etc-cni-netd\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960481 kubelet[2751]: I0129 11:16:40.960118 2751 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-cilium-run\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960481 kubelet[2751]: I0129 11:16:40.960127 2751 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-df2rp\" (UniqueName: \"kubernetes.io/projected/6f224cc2-8b4b-4085-9f27-180208d3eb0e-kube-api-access-df2rp\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:40.960481 kubelet[2751]: I0129 11:16:40.960135 2751 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f224cc2-8b4b-4085-9f27-180208d3eb0e-lib-modules\") on node \"ci-4152-2-0-b-e71ed2fe96\" DevicePath \"\"" Jan 29 11:16:41.529646 systemd[1]: var-lib-kubelet-pods-57b68f2e\x2d7e28\x2d4105\x2db1b9\x2dfe9478579790-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtndwl.mount: Deactivated successfully. Jan 29 11:16:41.529799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad-rootfs.mount: Deactivated successfully. Jan 29 11:16:41.529909 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad-shm.mount: Deactivated successfully. Jan 29 11:16:41.530000 systemd[1]: var-lib-kubelet-pods-6f224cc2\x2d8b4b\x2d4085\x2d9f27\x2d180208d3eb0e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddf2rp.mount: Deactivated successfully. 
Jan 29 11:16:41.530050 systemd[1]: var-lib-kubelet-pods-6f224cc2\x2d8b4b\x2d4085\x2d9f27\x2d180208d3eb0e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:16:41.530099 systemd[1]: var-lib-kubelet-pods-6f224cc2\x2d8b4b\x2d4085\x2d9f27\x2d180208d3eb0e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:16:41.608376 kubelet[2751]: I0129 11:16:41.607680 2751 scope.go:117] "RemoveContainer" containerID="b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301" Jan 29 11:16:41.613157 containerd[1474]: time="2025-01-29T11:16:41.613100615Z" level=info msg="RemoveContainer for \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\"" Jan 29 11:16:41.620566 containerd[1474]: time="2025-01-29T11:16:41.620525917Z" level=info msg="RemoveContainer for \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\" returns successfully" Jan 29 11:16:41.620981 kubelet[2751]: I0129 11:16:41.620957 2751 scope.go:117] "RemoveContainer" containerID="76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4" Jan 29 11:16:41.622713 systemd[1]: Removed slice kubepods-besteffort-pod57b68f2e_7e28_4105_b1b9_fe9478579790.slice - libcontainer container kubepods-besteffort-pod57b68f2e_7e28_4105_b1b9_fe9478579790.slice. Jan 29 11:16:41.626399 systemd[1]: Removed slice kubepods-burstable-pod6f224cc2_8b4b_4085_9f27_180208d3eb0e.slice - libcontainer container kubepods-burstable-pod6f224cc2_8b4b_4085_9f27_180208d3eb0e.slice. Jan 29 11:16:41.626498 systemd[1]: kubepods-burstable-pod6f224cc2_8b4b_4085_9f27_180208d3eb0e.slice: Consumed 8.178s CPU time. 
Jan 29 11:16:41.628035 containerd[1474]: time="2025-01-29T11:16:41.628002579Z" level=info msg="RemoveContainer for \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\"" Jan 29 11:16:41.632451 containerd[1474]: time="2025-01-29T11:16:41.632407089Z" level=info msg="RemoveContainer for \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\" returns successfully" Jan 29 11:16:41.632814 kubelet[2751]: I0129 11:16:41.632709 2751 scope.go:117] "RemoveContainer" containerID="4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec" Jan 29 11:16:41.634398 containerd[1474]: time="2025-01-29T11:16:41.634216524Z" level=info msg="RemoveContainer for \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\"" Jan 29 11:16:41.637224 containerd[1474]: time="2025-01-29T11:16:41.637194637Z" level=info msg="RemoveContainer for \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\" returns successfully" Jan 29 11:16:41.637457 kubelet[2751]: I0129 11:16:41.637380 2751 scope.go:117] "RemoveContainer" containerID="2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4" Jan 29 11:16:41.638330 containerd[1474]: time="2025-01-29T11:16:41.638269355Z" level=info msg="RemoveContainer for \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\"" Jan 29 11:16:41.645021 containerd[1474]: time="2025-01-29T11:16:41.644707179Z" level=info msg="RemoveContainer for \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\" returns successfully" Jan 29 11:16:41.646361 kubelet[2751]: I0129 11:16:41.645242 2751 scope.go:117] "RemoveContainer" containerID="1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395" Jan 29 11:16:41.650014 containerd[1474]: time="2025-01-29T11:16:41.649971446Z" level=info msg="RemoveContainer for \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\"" Jan 29 11:16:41.654233 containerd[1474]: time="2025-01-29T11:16:41.654094916Z" level=info msg="RemoveContainer 
for \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\" returns successfully" Jan 29 11:16:41.655132 kubelet[2751]: I0129 11:16:41.654633 2751 scope.go:117] "RemoveContainer" containerID="b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301" Jan 29 11:16:41.655480 containerd[1474]: time="2025-01-29T11:16:41.655241954Z" level=error msg="ContainerStatus for \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\": not found" Jan 29 11:16:41.655851 kubelet[2751]: E0129 11:16:41.655658 2751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\": not found" containerID="b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301" Jan 29 11:16:41.655851 kubelet[2751]: I0129 11:16:41.655689 2751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301"} err="failed to get container status \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\": rpc error: code = NotFound desc = an error occurred when try to find container \"b69317f1871c4cad4737d46b4a92fff6afc47520c2bbe6af52f8aa8d217af301\": not found" Jan 29 11:16:41.655851 kubelet[2751]: I0129 11:16:41.655768 2751 scope.go:117] "RemoveContainer" containerID="76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4" Jan 29 11:16:41.656047 containerd[1474]: time="2025-01-29T11:16:41.655988792Z" level=error msg="ContainerStatus for \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\": not found" Jan 29 11:16:41.656209 kubelet[2751]: E0129 11:16:41.656118 2751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\": not found" containerID="76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4" Jan 29 11:16:41.656303 kubelet[2751]: I0129 11:16:41.656207 2751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4"} err="failed to get container status \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\": rpc error: code = NotFound desc = an error occurred when try to find container \"76c9d6428ddd005da0ddf00ea2309eaad14710b1b59aeda9b8e6cc7833b869e4\": not found" Jan 29 11:16:41.656303 kubelet[2751]: I0129 11:16:41.656242 2751 scope.go:117] "RemoveContainer" containerID="4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec" Jan 29 11:16:41.657345 containerd[1474]: time="2025-01-29T11:16:41.657296069Z" level=error msg="ContainerStatus for \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\": not found" Jan 29 11:16:41.657468 kubelet[2751]: E0129 11:16:41.657443 2751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\": not found" containerID="4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec" Jan 29 11:16:41.657587 kubelet[2751]: I0129 11:16:41.657473 2751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec"} err="failed to get container status \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"4110be6c9c13a636586b7a263e2d1b7cad3efea0996c289acf25baf45f9174ec\": not found" Jan 29 11:16:41.657587 kubelet[2751]: I0129 11:16:41.657490 2751 scope.go:117] "RemoveContainer" containerID="2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4" Jan 29 11:16:41.657816 containerd[1474]: time="2025-01-29T11:16:41.657792588Z" level=error msg="ContainerStatus for \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\": not found" Jan 29 11:16:41.658224 kubelet[2751]: E0129 11:16:41.658006 2751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\": not found" containerID="2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4" Jan 29 11:16:41.658224 kubelet[2751]: I0129 11:16:41.658145 2751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4"} err="failed to get container status \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"2eecd208c09017684c19e7b97b11499ca10d1adb41a3a3ce4b86f424b2913fb4\": not found" Jan 29 11:16:41.658224 kubelet[2751]: I0129 11:16:41.658164 2751 scope.go:117] "RemoveContainer" containerID="1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395" Jan 29 11:16:41.658457 containerd[1474]: 
time="2025-01-29T11:16:41.658398226Z" level=error msg="ContainerStatus for \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\": not found" Jan 29 11:16:41.658608 kubelet[2751]: E0129 11:16:41.658556 2751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\": not found" containerID="1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395" Jan 29 11:16:41.658608 kubelet[2751]: I0129 11:16:41.658601 2751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395"} err="failed to get container status \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\": rpc error: code = NotFound desc = an error occurred when try to find container \"1859df344784f94afb02f19430fa441cd68f3f86ac37062b7836ab3d8dff5395\": not found" Jan 29 11:16:41.658684 kubelet[2751]: I0129 11:16:41.658617 2751 scope.go:117] "RemoveContainer" containerID="6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251" Jan 29 11:16:41.659816 containerd[1474]: time="2025-01-29T11:16:41.659794503Z" level=info msg="RemoveContainer for \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\"" Jan 29 11:16:41.664384 containerd[1474]: time="2025-01-29T11:16:41.664333972Z" level=info msg="RemoveContainer for \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\" returns successfully" Jan 29 11:16:41.664699 kubelet[2751]: I0129 11:16:41.664622 2751 scope.go:117] "RemoveContainer" containerID="6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251" Jan 29 11:16:41.664990 containerd[1474]: 
time="2025-01-29T11:16:41.664933370Z" level=error msg="ContainerStatus for \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\": not found" Jan 29 11:16:41.665081 kubelet[2751]: E0129 11:16:41.665064 2751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\": not found" containerID="6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251" Jan 29 11:16:41.665081 kubelet[2751]: I0129 11:16:41.665089 2751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251"} err="failed to get container status \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\": rpc error: code = NotFound desc = an error occurred when try to find container \"6da080af8978ae4edf8c0cdc8af0533252fa4e73f3c1b75a28edb5fe07a99251\": not found" Jan 29 11:16:42.611054 sshd[4355]: Connection closed by 147.75.109.163 port 55470 Jan 29 11:16:42.612388 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:42.619468 kubelet[2751]: I0129 11:16:42.619419 2751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57b68f2e-7e28-4105-b1b9-fe9478579790" path="/var/lib/kubelet/pods/57b68f2e-7e28-4105-b1b9-fe9478579790/volumes" Jan 29 11:16:42.622402 kubelet[2751]: I0129 11:16:42.621195 2751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f224cc2-8b4b-4085-9f27-180208d3eb0e" path="/var/lib/kubelet/pods/6f224cc2-8b4b-4085-9f27-180208d3eb0e/volumes" Jan 29 11:16:42.623672 systemd[1]: sshd@20-138.199.151.137:22-147.75.109.163:55470.service: Deactivated successfully. 
Jan 29 11:16:42.626785 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:16:42.627738 systemd[1]: session-21.scope: Consumed 1.394s CPU time. Jan 29 11:16:42.629367 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:16:42.631676 systemd-logind[1452]: Removed session 21. Jan 29 11:16:42.783218 systemd[1]: Started sshd@21-138.199.151.137:22-147.75.109.163:36458.service - OpenSSH per-connection server daemon (147.75.109.163:36458). Jan 29 11:16:43.768464 sshd[4517]: Accepted publickey for core from 147.75.109.163 port 36458 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:43.770455 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:43.777471 systemd-logind[1452]: New session 22 of user core. Jan 29 11:16:43.784091 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:16:44.614184 kubelet[2751]: E0129 11:16:44.614130 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gxm42" podUID="f53165f6-06fd-4922-a608-d8be5ebba892" Jan 29 11:16:45.266413 kubelet[2751]: E0129 11:16:45.266345 2751 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f224cc2-8b4b-4085-9f27-180208d3eb0e" containerName="mount-cgroup" Jan 29 11:16:45.266413 kubelet[2751]: E0129 11:16:45.266388 2751 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="57b68f2e-7e28-4105-b1b9-fe9478579790" containerName="cilium-operator" Jan 29 11:16:45.266413 kubelet[2751]: E0129 11:16:45.266396 2751 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f224cc2-8b4b-4085-9f27-180208d3eb0e" containerName="clean-cilium-state" Jan 29 11:16:45.266413 kubelet[2751]: E0129 11:16:45.266402 2751 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="6f224cc2-8b4b-4085-9f27-180208d3eb0e" containerName="cilium-agent" Jan 29 11:16:45.266413 kubelet[2751]: E0129 11:16:45.266409 2751 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f224cc2-8b4b-4085-9f27-180208d3eb0e" containerName="apply-sysctl-overwrites" Jan 29 11:16:45.266413 kubelet[2751]: E0129 11:16:45.266415 2751 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f224cc2-8b4b-4085-9f27-180208d3eb0e" containerName="mount-bpf-fs" Jan 29 11:16:45.266673 kubelet[2751]: I0129 11:16:45.266439 2751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f224cc2-8b4b-4085-9f27-180208d3eb0e" containerName="cilium-agent" Jan 29 11:16:45.266673 kubelet[2751]: I0129 11:16:45.266446 2751 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b68f2e-7e28-4105-b1b9-fe9478579790" containerName="cilium-operator" Jan 29 11:16:45.278970 systemd[1]: Created slice kubepods-burstable-poda15a22e7_dd2d_49e2_9111_e52795869cfc.slice - libcontainer container kubepods-burstable-poda15a22e7_dd2d_49e2_9111_e52795869cfc.slice. 
Jan 29 11:16:45.393162 kubelet[2751]: I0129 11:16:45.392974 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-bpf-maps\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393162 kubelet[2751]: I0129 11:16:45.393041 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-host-proc-sys-net\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393162 kubelet[2751]: I0129 11:16:45.393069 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-host-proc-sys-kernel\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393478 kubelet[2751]: I0129 11:16:45.393231 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-hostproc\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393478 kubelet[2751]: I0129 11:16:45.393264 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-cilium-cgroup\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393478 kubelet[2751]: I0129 11:16:45.393285 2751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-cni-path\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393478 kubelet[2751]: I0129 11:16:45.393309 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-xtables-lock\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393478 kubelet[2751]: I0129 11:16:45.393331 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g67wr\" (UniqueName: \"kubernetes.io/projected/a15a22e7-dd2d-49e2-9111-e52795869cfc-kube-api-access-g67wr\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393478 kubelet[2751]: I0129 11:16:45.393360 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-lib-modules\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393742 kubelet[2751]: I0129 11:16:45.393379 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-etc-cni-netd\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393742 kubelet[2751]: I0129 11:16:45.393400 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/a15a22e7-dd2d-49e2-9111-e52795869cfc-cilium-run\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393742 kubelet[2751]: I0129 11:16:45.393420 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a15a22e7-dd2d-49e2-9111-e52795869cfc-clustermesh-secrets\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393742 kubelet[2751]: I0129 11:16:45.393443 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a15a22e7-dd2d-49e2-9111-e52795869cfc-cilium-config-path\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393742 kubelet[2751]: I0129 11:16:45.393462 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a15a22e7-dd2d-49e2-9111-e52795869cfc-cilium-ipsec-secrets\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.393742 kubelet[2751]: I0129 11:16:45.393481 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a15a22e7-dd2d-49e2-9111-e52795869cfc-hubble-tls\") pod \"cilium-6g5vm\" (UID: \"a15a22e7-dd2d-49e2-9111-e52795869cfc\") " pod="kube-system/cilium-6g5vm" Jan 29 11:16:45.441093 sshd[4519]: Connection closed by 147.75.109.163 port 36458 Jan 29 11:16:45.442158 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:45.448737 systemd[1]: sshd@21-138.199.151.137:22-147.75.109.163:36458.service: Deactivated successfully. 
Jan 29 11:16:45.451775 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:16:45.453243 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:16:45.454556 systemd-logind[1452]: Removed session 22. Jan 29 11:16:45.586247 containerd[1474]: time="2025-01-29T11:16:45.586027118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6g5vm,Uid:a15a22e7-dd2d-49e2-9111-e52795869cfc,Namespace:kube-system,Attempt:0,}" Jan 29 11:16:45.617132 containerd[1474]: time="2025-01-29T11:16:45.617005171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:16:45.617132 containerd[1474]: time="2025-01-29T11:16:45.617067170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:16:45.617132 containerd[1474]: time="2025-01-29T11:16:45.617083770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:16:45.618088 containerd[1474]: time="2025-01-29T11:16:45.617163410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:16:45.624224 systemd[1]: Started sshd@22-138.199.151.137:22-147.75.109.163:36464.service - OpenSSH per-connection server daemon (147.75.109.163:36464). Jan 29 11:16:45.645114 systemd[1]: Started cri-containerd-6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2.scope - libcontainer container 6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2. 
Jan 29 11:16:45.680900 containerd[1474]: time="2025-01-29T11:16:45.678626356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6g5vm,Uid:a15a22e7-dd2d-49e2-9111-e52795869cfc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\"" Jan 29 11:16:45.688536 containerd[1474]: time="2025-01-29T11:16:45.687347297Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:16:45.700226 containerd[1474]: time="2025-01-29T11:16:45.700181829Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9f15ac3f8903344ac99bf745b7040c053062d1de8bdab32ae72475dd24d13d9\"" Jan 29 11:16:45.702250 containerd[1474]: time="2025-01-29T11:16:45.701099507Z" level=info msg="StartContainer for \"d9f15ac3f8903344ac99bf745b7040c053062d1de8bdab32ae72475dd24d13d9\"" Jan 29 11:16:45.736094 systemd[1]: Started cri-containerd-d9f15ac3f8903344ac99bf745b7040c053062d1de8bdab32ae72475dd24d13d9.scope - libcontainer container d9f15ac3f8903344ac99bf745b7040c053062d1de8bdab32ae72475dd24d13d9. Jan 29 11:16:45.770022 containerd[1474]: time="2025-01-29T11:16:45.769948956Z" level=info msg="StartContainer for \"d9f15ac3f8903344ac99bf745b7040c053062d1de8bdab32ae72475dd24d13d9\" returns successfully" Jan 29 11:16:45.779288 systemd[1]: cri-containerd-d9f15ac3f8903344ac99bf745b7040c053062d1de8bdab32ae72475dd24d13d9.scope: Deactivated successfully. 
Jan 29 11:16:45.798484 kubelet[2751]: E0129 11:16:45.798385 2751 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:16:45.819838 containerd[1474]: time="2025-01-29T11:16:45.819514448Z" level=info msg="shim disconnected" id=d9f15ac3f8903344ac99bf745b7040c053062d1de8bdab32ae72475dd24d13d9 namespace=k8s.io Jan 29 11:16:45.819838 containerd[1474]: time="2025-01-29T11:16:45.819600888Z" level=warning msg="cleaning up after shim disconnected" id=d9f15ac3f8903344ac99bf745b7040c053062d1de8bdab32ae72475dd24d13d9 namespace=k8s.io Jan 29 11:16:45.819838 containerd[1474]: time="2025-01-29T11:16:45.819613848Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:45.831923 containerd[1474]: time="2025-01-29T11:16:45.831526782Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:16:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:16:46.612918 kubelet[2751]: E0129 11:16:46.612224 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gxm42" podUID="f53165f6-06fd-4922-a608-d8be5ebba892" Jan 29 11:16:46.624025 sshd[4550]: Accepted publickey for core from 147.75.109.163 port 36464 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:46.626334 sshd-session[4550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:46.631951 systemd-logind[1452]: New session 23 of user core. Jan 29 11:16:46.638640 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 29 11:16:46.647920 containerd[1474]: time="2025-01-29T11:16:46.646151397Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:16:46.659568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263413898.mount: Deactivated successfully. Jan 29 11:16:46.661768 containerd[1474]: time="2025-01-29T11:16:46.661215085Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172\"" Jan 29 11:16:46.663253 containerd[1474]: time="2025-01-29T11:16:46.662152523Z" level=info msg="StartContainer for \"2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172\"" Jan 29 11:16:46.706356 systemd[1]: Started cri-containerd-2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172.scope - libcontainer container 2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172. Jan 29 11:16:46.741266 containerd[1474]: time="2025-01-29T11:16:46.741205074Z" level=info msg="StartContainer for \"2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172\" returns successfully" Jan 29 11:16:46.742518 systemd[1]: cri-containerd-2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172.scope: Deactivated successfully. 
Jan 29 11:16:46.772967 containerd[1474]: time="2025-01-29T11:16:46.772896087Z" level=info msg="shim disconnected" id=2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172 namespace=k8s.io Jan 29 11:16:46.772967 containerd[1474]: time="2025-01-29T11:16:46.772958407Z" level=warning msg="cleaning up after shim disconnected" id=2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172 namespace=k8s.io Jan 29 11:16:46.772967 containerd[1474]: time="2025-01-29T11:16:46.772967327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:47.303666 sshd[4638]: Connection closed by 147.75.109.163 port 36464 Jan 29 11:16:47.306278 sshd-session[4550]: pam_unix(sshd:session): session closed for user core Jan 29 11:16:47.312412 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:16:47.312997 systemd[1]: sshd@22-138.199.151.137:22-147.75.109.163:36464.service: Deactivated successfully. Jan 29 11:16:47.315909 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:16:47.318984 systemd-logind[1452]: Removed session 23. Jan 29 11:16:47.399552 kubelet[2751]: I0129 11:16:47.399472 2751 setters.go:600] "Node became not ready" node="ci-4152-2-0-b-e71ed2fe96" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:16:47Z","lastTransitionTime":"2025-01-29T11:16:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 11:16:47.482661 systemd[1]: Started sshd@23-138.199.151.137:22-147.75.109.163:36478.service - OpenSSH per-connection server daemon (147.75.109.163:36478). Jan 29 11:16:47.507204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2648640675e9f47db2f678d97aee146ee1aaa2ab9646c7eb1084501b41ba7172-rootfs.mount: Deactivated successfully. 
Jan 29 11:16:47.647903 containerd[1474]: time="2025-01-29T11:16:47.646965619Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:16:47.669182 containerd[1474]: time="2025-01-29T11:16:47.667136297Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8\"" Jan 29 11:16:47.669182 containerd[1474]: time="2025-01-29T11:16:47.668232655Z" level=info msg="StartContainer for \"ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8\"" Jan 29 11:16:47.709127 systemd[1]: Started cri-containerd-ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8.scope - libcontainer container ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8. Jan 29 11:16:47.743642 containerd[1474]: time="2025-01-29T11:16:47.743579739Z" level=info msg="StartContainer for \"ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8\" returns successfully" Jan 29 11:16:47.748347 systemd[1]: cri-containerd-ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8.scope: Deactivated successfully. Jan 29 11:16:47.768985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8-rootfs.mount: Deactivated successfully. 
Jan 29 11:16:47.776004 containerd[1474]: time="2025-01-29T11:16:47.775932871Z" level=info msg="shim disconnected" id=ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8 namespace=k8s.io Jan 29 11:16:47.776004 containerd[1474]: time="2025-01-29T11:16:47.775999071Z" level=warning msg="cleaning up after shim disconnected" id=ca2c0b044a1d8f190c2c78c15ad0306ab584c18f882764e7c5be321a7833feb8 namespace=k8s.io Jan 29 11:16:47.776004 containerd[1474]: time="2025-01-29T11:16:47.776008231Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:48.468011 sshd[4708]: Accepted publickey for core from 147.75.109.163 port 36478 ssh2: RSA SHA256:nclG6x2+CCPDg1J87dfSmoG85ir0BMjvhJKqcua3Jmo Jan 29 11:16:48.470049 sshd-session[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:16:48.477829 systemd-logind[1452]: New session 24 of user core. Jan 29 11:16:48.480113 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:16:48.613328 kubelet[2751]: E0129 11:16:48.611988 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gxm42" podUID="f53165f6-06fd-4922-a608-d8be5ebba892" Jan 29 11:16:48.657998 containerd[1474]: time="2025-01-29T11:16:48.657923475Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:16:48.680356 containerd[1474]: time="2025-01-29T11:16:48.680302869Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891\"" Jan 29 
11:16:48.681096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954585413.mount: Deactivated successfully. Jan 29 11:16:48.682722 containerd[1474]: time="2025-01-29T11:16:48.682229185Z" level=info msg="StartContainer for \"cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891\"" Jan 29 11:16:48.724130 systemd[1]: Started cri-containerd-cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891.scope - libcontainer container cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891. Jan 29 11:16:48.767287 systemd[1]: cri-containerd-cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891.scope: Deactivated successfully. Jan 29 11:16:48.770703 containerd[1474]: time="2025-01-29T11:16:48.768660290Z" level=info msg="StartContainer for \"cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891\" returns successfully" Jan 29 11:16:48.797549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891-rootfs.mount: Deactivated successfully. 
Jan 29 11:16:48.805254 containerd[1474]: time="2025-01-29T11:16:48.805156417Z" level=info msg="shim disconnected" id=cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891 namespace=k8s.io Jan 29 11:16:48.805254 containerd[1474]: time="2025-01-29T11:16:48.805249216Z" level=warning msg="cleaning up after shim disconnected" id=cb4f0928a059198e5c49c0a3cb334e9a7f77cd06342a9085889cb1235ccd4891 namespace=k8s.io Jan 29 11:16:48.805254 containerd[1474]: time="2025-01-29T11:16:48.805259976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:48.823039 containerd[1474]: time="2025-01-29T11:16:48.822953701Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:16:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:16:49.663298 containerd[1474]: time="2025-01-29T11:16:49.663173275Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:16:49.685234 containerd[1474]: time="2025-01-29T11:16:49.685185672Z" level=info msg="CreateContainer within sandbox \"6ec62937e43b9bd649262e83980c3d76506b42f9df0d21c72cd2cf1a67fdccb2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d1f727c5c356cb0a9c8544d39ea9177459d2854bddcc91c06f21740a89a5ccc\"" Jan 29 11:16:49.688188 containerd[1474]: time="2025-01-29T11:16:49.687288828Z" level=info msg="StartContainer for \"0d1f727c5c356cb0a9c8544d39ea9177459d2854bddcc91c06f21740a89a5ccc\"" Jan 29 11:16:49.721097 systemd[1]: run-containerd-runc-k8s.io-0d1f727c5c356cb0a9c8544d39ea9177459d2854bddcc91c06f21740a89a5ccc-runc.FVWVfx.mount: Deactivated successfully. 
Jan 29 11:16:49.728057 systemd[1]: Started cri-containerd-0d1f727c5c356cb0a9c8544d39ea9177459d2854bddcc91c06f21740a89a5ccc.scope - libcontainer container 0d1f727c5c356cb0a9c8544d39ea9177459d2854bddcc91c06f21740a89a5ccc. Jan 29 11:16:49.765611 containerd[1474]: time="2025-01-29T11:16:49.765360634Z" level=info msg="StartContainer for \"0d1f727c5c356cb0a9c8544d39ea9177459d2854bddcc91c06f21740a89a5ccc\" returns successfully" Jan 29 11:16:50.086991 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 29 11:16:50.613212 kubelet[2751]: E0129 11:16:50.612588 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-gxm42" podUID="f53165f6-06fd-4922-a608-d8be5ebba892" Jan 29 11:16:50.645997 containerd[1474]: time="2025-01-29T11:16:50.645852493Z" level=info msg="StopPodSandbox for \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\"" Jan 29 11:16:50.646163 containerd[1474]: time="2025-01-29T11:16:50.646078653Z" level=info msg="TearDown network for sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" successfully" Jan 29 11:16:50.646163 containerd[1474]: time="2025-01-29T11:16:50.646106773Z" level=info msg="StopPodSandbox for \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" returns successfully" Jan 29 11:16:50.646827 containerd[1474]: time="2025-01-29T11:16:50.646752851Z" level=info msg="RemovePodSandbox for \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\"" Jan 29 11:16:50.646827 containerd[1474]: time="2025-01-29T11:16:50.646821211Z" level=info msg="Forcibly stopping sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\"" Jan 29 11:16:50.646999 containerd[1474]: time="2025-01-29T11:16:50.646905731Z" level=info msg="TearDown network for 
sandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" successfully" Jan 29 11:16:50.650550 containerd[1474]: time="2025-01-29T11:16:50.650501884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:50.650663 containerd[1474]: time="2025-01-29T11:16:50.650570604Z" level=info msg="RemovePodSandbox \"3cb5d374be666689d48d0ba099da1bb4fc76f74e7a688b5b7493fb0a679e07ad\" returns successfully" Jan 29 11:16:50.652469 containerd[1474]: time="2025-01-29T11:16:50.652076081Z" level=info msg="StopPodSandbox for \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\"" Jan 29 11:16:50.652469 containerd[1474]: time="2025-01-29T11:16:50.652184721Z" level=info msg="TearDown network for sandbox \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\" successfully" Jan 29 11:16:50.652469 containerd[1474]: time="2025-01-29T11:16:50.652206361Z" level=info msg="StopPodSandbox for \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\" returns successfully" Jan 29 11:16:50.653792 containerd[1474]: time="2025-01-29T11:16:50.653333639Z" level=info msg="RemovePodSandbox for \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\"" Jan 29 11:16:50.653792 containerd[1474]: time="2025-01-29T11:16:50.653385599Z" level=info msg="Forcibly stopping sandbox \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\"" Jan 29 11:16:50.653792 containerd[1474]: time="2025-01-29T11:16:50.653504438Z" level=info msg="TearDown network for sandbox \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\" successfully" Jan 29 11:16:50.666075 containerd[1474]: time="2025-01-29T11:16:50.665739255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:50.666075 containerd[1474]: time="2025-01-29T11:16:50.665899895Z" level=info msg="RemovePodSandbox \"ff3b4d3f31c52101e3fb72d6a0504aeca626a20f4980740f8e88e5746b7456b8\" returns successfully" Jan 29 11:16:52.959353 systemd-networkd[1370]: lxc_health: Link UP Jan 29 11:16:52.984231 systemd-networkd[1370]: lxc_health: Gained carrier Jan 29 11:16:53.610689 kubelet[2751]: I0129 11:16:53.610485 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6g5vm" podStartSLOduration=8.610467298 podStartE2EDuration="8.610467298s" podCreationTimestamp="2025-01-29 11:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:16:50.69961843 +0000 UTC m=+360.221620639" watchObservedRunningTime="2025-01-29 11:16:53.610467298 +0000 UTC m=+363.132469507" Jan 29 11:16:54.203119 systemd-networkd[1370]: lxc_health: Gained IPv6LL Jan 29 11:16:55.512222 systemd[1]: run-containerd-runc-k8s.io-0d1f727c5c356cb0a9c8544d39ea9177459d2854bddcc91c06f21740a89a5ccc-runc.o2V3c4.mount: Deactivated successfully. Jan 29 11:16:59.813775 systemd[1]: run-containerd-runc-k8s.io-0d1f727c5c356cb0a9c8544d39ea9177459d2854bddcc91c06f21740a89a5ccc-runc.w7m1EV.mount: Deactivated successfully. Jan 29 11:17:00.031008 sshd[4766]: Connection closed by 147.75.109.163 port 36478 Jan 29 11:17:00.032441 sshd-session[4708]: pam_unix(sshd:session): session closed for user core Jan 29 11:17:00.036789 systemd[1]: sshd@23-138.199.151.137:22-147.75.109.163:36478.service: Deactivated successfully. Jan 29 11:17:00.039378 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:17:00.041802 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. 
Jan 29 11:17:00.043192 systemd-logind[1452]: Removed session 24.
Jan 29 11:17:15.774277 kubelet[2751]: E0129 11:17:15.774126 2751 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59990->10.0.0.2:2379: read: connection timed out"
Jan 29 11:17:15.782617 systemd[1]: cri-containerd-59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd.scope: Deactivated successfully.
Jan 29 11:17:15.782948 systemd[1]: cri-containerd-59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd.scope: Consumed 2.424s CPU time, 15.6M memory peak, 0B memory swap peak.
Jan 29 11:17:15.805718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd-rootfs.mount: Deactivated successfully.
Jan 29 11:17:15.813708 containerd[1474]: time="2025-01-29T11:17:15.813536902Z" level=info msg="shim disconnected" id=59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd namespace=k8s.io
Jan 29 11:17:15.813708 containerd[1474]: time="2025-01-29T11:17:15.813613982Z" level=warning msg="cleaning up after shim disconnected" id=59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd namespace=k8s.io
Jan 29 11:17:15.813708 containerd[1474]: time="2025-01-29T11:17:15.813623142Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:17:16.742941 kubelet[2751]: I0129 11:17:16.742237 2751 scope.go:117] "RemoveContainer" containerID="59f1e069ab48a4673e01102bf35a3c048c70790bd8a39a089689b4a97f5938dd"
Jan 29 11:17:16.744861 containerd[1474]: time="2025-01-29T11:17:16.744762696Z" level=info msg="CreateContainer within sandbox \"0298116b07414285d92d425febc2c8cc9b4f5c5c3e6ec5acc888ca72e9b2205c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 29 11:17:16.758647 containerd[1474]: time="2025-01-29T11:17:16.758595766Z" level=info msg="CreateContainer within sandbox \"0298116b07414285d92d425febc2c8cc9b4f5c5c3e6ec5acc888ca72e9b2205c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"01990cebb3e7169e2a6801821e652f0869e50ea6399b0062920fde1cba95564a\""
Jan 29 11:17:16.759321 containerd[1474]: time="2025-01-29T11:17:16.759283446Z" level=info msg="StartContainer for \"01990cebb3e7169e2a6801821e652f0869e50ea6399b0062920fde1cba95564a\""
Jan 29 11:17:16.795319 systemd[1]: Started cri-containerd-01990cebb3e7169e2a6801821e652f0869e50ea6399b0062920fde1cba95564a.scope - libcontainer container 01990cebb3e7169e2a6801821e652f0869e50ea6399b0062920fde1cba95564a.
Jan 29 11:17:16.841391 containerd[1474]: time="2025-01-29T11:17:16.841225629Z" level=info msg="StartContainer for \"01990cebb3e7169e2a6801821e652f0869e50ea6399b0062920fde1cba95564a\" returns successfully"
Jan 29 11:17:16.864727 systemd[1]: cri-containerd-9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e.scope: Deactivated successfully.
Jan 29 11:17:16.865724 systemd[1]: cri-containerd-9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e.scope: Consumed 6.725s CPU time, 19.6M memory peak, 0B memory swap peak.
Jan 29 11:17:16.900006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e-rootfs.mount: Deactivated successfully.
Jan 29 11:17:16.906531 containerd[1474]: time="2025-01-29T11:17:16.906186225Z" level=info msg="shim disconnected" id=9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e namespace=k8s.io
Jan 29 11:17:16.906531 containerd[1474]: time="2025-01-29T11:17:16.906240865Z" level=warning msg="cleaning up after shim disconnected" id=9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e namespace=k8s.io
Jan 29 11:17:16.906531 containerd[1474]: time="2025-01-29T11:17:16.906249585Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:17:17.118422 kubelet[2751]: I0129 11:17:17.117313 2751 status_manager.go:851] "Failed to get status for pod" podUID="345b4fcf7d0b600eda1396915ca8fd57" pod="kube-system/kube-apiserver-ci-4152-2-0-b-e71ed2fe96" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59904->10.0.0.2:2379: read: connection timed out"
Jan 29 11:17:17.751917 kubelet[2751]: I0129 11:17:17.751185 2751 scope.go:117] "RemoveContainer" containerID="9eb8e78a068be5fcde43ebd34e9a7f73d75f429b65d5720c70c5cd0df63cdf0e"
Jan 29 11:17:17.757311 containerd[1474]: time="2025-01-29T11:17:17.757254233Z" level=info msg="CreateContainer within sandbox \"19ffc8f5850fe692071229398b84f171be4cda4344c6b1418721848dd4fd15ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 29 11:17:17.783970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653445625.mount: Deactivated successfully.
Jan 29 11:17:17.791491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971555895.mount: Deactivated successfully.
Jan 29 11:17:17.793482 containerd[1474]: time="2025-01-29T11:17:17.793235610Z" level=info msg="CreateContainer within sandbox \"19ffc8f5850fe692071229398b84f171be4cda4344c6b1418721848dd4fd15ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"70d05d608a52bf887765ea4766a982c307ac04a6a0e4db3b5a5b2efef9a79b1b\""
Jan 29 11:17:17.794789 containerd[1474]: time="2025-01-29T11:17:17.794644889Z" level=info msg="StartContainer for \"70d05d608a52bf887765ea4766a982c307ac04a6a0e4db3b5a5b2efef9a79b1b\""
Jan 29 11:17:17.837097 systemd[1]: Started cri-containerd-70d05d608a52bf887765ea4766a982c307ac04a6a0e4db3b5a5b2efef9a79b1b.scope - libcontainer container 70d05d608a52bf887765ea4766a982c307ac04a6a0e4db3b5a5b2efef9a79b1b.
Jan 29 11:17:17.881614 containerd[1474]: time="2025-01-29T11:17:17.881443753Z" level=info msg="StartContainer for \"70d05d608a52bf887765ea4766a982c307ac04a6a0e4db3b5a5b2efef9a79b1b\" returns successfully"
Jan 29 11:17:21.150144 kubelet[2751]: E0129 11:17:21.149844 2751 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59796->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-0-b-e71ed2fe96.181f25b2b69c8eb8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-0-b-e71ed2fe96,UID:345b4fcf7d0b600eda1396915ca8fd57,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-b-e71ed2fe96,},FirstTimestamp:2025-01-29 11:17:10.686420664 +0000 UTC m=+380.208422913,LastTimestamp:2025-01-29 11:17:10.686420664 +0000 UTC m=+380.208422913,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-b-e71ed2fe96,}"